rathrio / lists
Tracking recommendations (movies, TV shows, games, books, ...)
Home Page: https://lists.rathr.io
A collection is either a list of very specific items, or a specific filter (i.e. a dynamic list). I want to use collections whenever tags don't make sense, e.g. "My favorite horror films of 2019".
Below is an example of the situation: I incorrectly searched for "He Men" (I should have searched for "He Man").
Suggestion: Apply a distance measure between every retrieved suggestion and the initial (user-provided) search term, and only list suggestions that exhibit a sufficiently small distance value. A handy measure for word difference is the so-called Levenshtein distance.
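A minimal sketch of the idea, using the classic dynamic-programming Levenshtein distance (the threshold of 2 and the suggestion list are just illustrative assumptions):

```ruby
# Classic dynamic-programming Levenshtein (edit) distance between two strings.
def levenshtein(a, b)
  # d[i][j] = edit distance between a[0, i] and b[0, j]
  d = Array.new(a.length + 1) { |i| [i] + [0] * b.length }
  (0..b.length).each { |j| d[0][j] = j }

  (1..a.length).each do |i|
    (1..b.length).each do |j|
      cost = a[i - 1] == b[j - 1] ? 0 : 1
      d[i][j] = [
        d[i - 1][j] + 1,        # deletion
        d[i][j - 1] + 1,        # insertion
        d[i - 1][j - 1] + cost  # substitution
      ].min
    end
  end
  d[a.length][b.length]
end

# Only keep suggestions within distance 2 of the query:
suggestions = ['He-Man', 'He Man', 'X-Men']
query = 'He Men'
close = suggestions.select { |s| levenshtein(query.downcase, s.downcase) <= 2 }
# => ["He-Man", "He Man"]
```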
Items have many notes. Allow users to edit them (or just one of them, maybe we don't need many notes) on the item's edit page.
Related to #14.
When searching for an item, automatically display some scraper search results and allow them to be imported.
So that I can finally use the safe navigation stuff.
Instead of rendering a form when clicking on an item, show a detail page containing the following things for now:
This is a proposal describing an idea of how we could define scrapers using a DSL.
The DSL I am thinking of would exhibit two macros, rely_on and scrape_attribute.
# Fetches the results provided by the target client type. The result
# is built by applying the given (optional) block to the client's output.
# Moreover, it defines an accessor to an instance of the client type.
# This can be used by the scraper instance.
#
# @param client [Symbol] name of the client class the scraper should rely on
# @param block [Proc] defines how the result object is formed
Scraper#rely_on(client, &block)

# @param attribute_name [Symbol] identifier of a member in the result hash
# @param block [Proc] defines how an attribute should be parsed
Scraper#scrape_attribute(attribute_name, &block)
In order to define a new scraper, the following 3 steps would be required:
The following is an example of how the definition of TvScraper could look when relying on the proposed DSL:
class TvScraper
  include Scraper

  rely_on :moviedb_client do
    year  = query[/\((\d{2,4})\)/, 1]
    title = query.gsub(/\(\d{2,4}\)/, '').strip
    moviedb_client.search(title, type: :tv, year: year)['results'].to_a
  end

  scrape_attribute :name

  scrape_attribute :description do |result|
    result['overview']
  end

  scrape_attribute :image do |result|
    MoviedbClient::IMAGE_BASE_URI + result['poster_path'] if result['poster_path']
  end

  scrape_attribute :date do |result|
    Date.parse(result['first_air_date']) if result['first_air_date'].present?
  end

  scrape_attribute :tags do |result|
    result['genre_ids'].map { |id| MoviedbClient::GENRES[id] }.compact
  end
end
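To make the proposal concrete, here is a minimal sketch of how the two macros could be implemented. This is illustration only; none of these internals exist in the codebase, and the DemoScraper and its stub result hash are made up:

```ruby
module Scraper
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # Stores the block that fetches raw results and defines an accessor
    # named after the client (e.g. #moviedb_client).
    def rely_on(client_name, &block)
      attr_accessor client_name
      define_method(:fetch_results) { instance_exec(&block) }
    end

    # Registers how a single attribute is extracted from one result hash.
    # Without a block, the attribute is read from the hash by name.
    def scrape_attribute(name, &block)
      scraped_attributes[name] = block || ->(result) { result[name.to_s] }
    end

    def scraped_attributes
      @scraped_attributes ||= {}
    end
  end

  # Applies every registered attribute block to one result hash.
  def scrape(result)
    self.class.scraped_attributes.transform_values do |blk|
      instance_exec(result, &blk)
    end
  end
end

# Usage with a stub result hash:
class DemoScraper
  include Scraper
  scrape_attribute :name
  scrape_attribute :description do |result|
    result['overview']
  end
end

DemoScraper.new.scrape('name' => 'He-Man', 'overview' => 'Cartoon')
# => { name: "He-Man", description: "Cartoon" }
```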
IGDB is no longer free. Find an alternative.
Consider using Zeitwerk by default.
On the settings page in the tag tab, clicking on a tag should render a form to edit the tag. (Same behaviour as on the labels settings page)
Currently, one can filter items by their name using the search bar. Add possibility to filter items by:
For JS and CSS style checking.
When soft-deleting an item, its links and notes are hard-deleted. Make them paranoid as well.
I haven't noticed any pattern yet, but it seems I get signed out after an idle time of approx. 3 minutes.
Install Let's Encrypt SSL certificate on server. See this guide.
E.g. "YMS", "Single", "Oscar", ...
This should probably be a polymorphic list, i.e., it should accept simple strings such as "YMS", but also references to other users or lists, or URLs, or both (say I want to remember that this was recommended to me by YMS in this YouTube video ...).
Some APIs are really slow, or require a separate request for scraping genres, for instance.
In some cases, we need scrapers to perform really well, such as when listing search results. Provide an option to skip scraping of attributes for speedup.
GameScraper.new(query: 'kingdom hearts', scrape_tags: false)
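One possible wiring for such an option; the class internals here are assumptions for illustration, not actual code from the repo:

```ruby
# Hypothetical sketch: scrape_<attribute>: false keyword options disable
# the corresponding (possibly expensive) attribute scraping.
class GameScraper
  ATTRIBUTES = %i[name description tags].freeze

  def initialize(query:, **options)
    @query = query
    @skipped = ATTRIBUTES.select { |attr| options[:"scrape_#{attr}"] == false }
  end

  # Returns only the attributes that were not skipped.
  def scrape(result)
    (ATTRIBUTES - @skipped).each_with_object({}) do |attr, hash|
      hash[attr] = result[attr.to_s]
    end
  end
end

scraper = GameScraper.new(query: 'kingdom hearts', scrape_tags: false)
scraper.scrape('name' => 'KH', 'description' => 'RPG', 'tags' => ['rpg'])
# => { name: "KH", description: "RPG" }
```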
Currently, one is only able to set a remote image url in the item form. Provide a more standard way to update item images.
I am someone who watches multiple series in parallel. Sometimes, I stop watching a particular series for quite some time (for reasons). Unfortunately, I tend to forget where I left off watching.
Users should be able to make a note (in a TV list item) where they left off (watching) a series.
Some scrapers create Link records for items. Show them on the item form and allow them to be edited.
These could be youtube links for albums, links to the IGDB details page for a game, magnet links for movies etc.
Mention which APIs are used to scrape data.
Where does that make the most sense?
Does it make sense that items can have many labels, i.e., can occur in many lists?
The use case I initially thought of was having some sort of inbox/starred list, and that would have just been another label to assign.
I think it'd simplify things to just make items belong to just one category and solve any other nuances with tags or another attribute.
How does one write a new scraper/client?
The displayed name should be context sensitive, e.g. author for books, studio for games, director for film, etc...
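A sketch of the mapping; the category names and the fallback label are assumptions:

```ruby
# Hypothetical mapping from item category to the creator label shown in the UI.
CREATOR_LABELS = {
  'book'  => 'Author',
  'game'  => 'Studio',
  'film'  => 'Director',
  'album' => 'Artist'
}.freeze

def creator_label(category)
  CREATOR_LABELS.fetch(category, 'Creator')
end

creator_label('book') # => "Author"
creator_label('tv')   # => "Creator" (fallback)
```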
There might be no performance boost at all (memory usage might even get worse), but I'd like to experiment with this
In order to know when one watched a film (e.g., which films did I watch last month?).
A done_at timestamp would probably suffice. This would enable aggregations such as "Done in 2019".
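Grouping on such a timestamp could look like this plain-Ruby sketch (Item here is a stand-in struct; the real model would be an ActiveRecord record):

```ruby
require 'date'

# Aggregate items by the year of their done_at timestamp,
# e.g. to render a "Done in 2019" section.
Item = Struct.new(:name, :done_at)

def done_by_year(items)
  items.reject { |i| i.done_at.nil? }   # skip items not done yet
       .group_by { |i| i.done_at.year }
end

items = [
  Item.new('Parasite', Date.new(2019, 11, 2)),
  Item.new('Alien',    Date.new(2019, 1, 14)),
  Item.new('Dune',     nil) # not done yet
]

done_by_year(items).transform_values { |is| is.map(&:name) }
# => { 2019 => ["Parasite", "Alien"] }
```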
Introduce a JSON REST / GraphQL API for native clients / fancier frontends.
Results in a cleaner more minimal UI.
Currently, scraping is done in a background process via sidekiq.
Approaches:
items. While the job is still running, show a spinner in the item box. When the job is done, update the item box.

E.g. "Korea (KOR)" or "Switzerland (CH)".
I need a way to filter for items with a specific original language, e.g., show me all Korean-language films.
Users should be able to sort (asc and desc) their items by:
This is a prerequisite for #5.
What's the scope for item ranks? The default case is that items have one label assigned to them. So it'd make the most sense to require a unique rank within one (the first) label.
We should probably disallow reordering when items of multiple labels are displayed.
Any smarter approaches for multi-label items? Should we get rid of multiple labels altogether?
This should enable me to move away from Google to star/favorite places.
E.g. year=2000,2012 should show all items with year 2000 or 2012.
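A sketch of parsing such a multi-value filter; the item shape and helper names are assumptions:

```ruby
# Parse a "year=2000,2012"-style filter param into a key and a value list.
def parse_filter(param)
  key, values = param.split('=', 2)
  [key.to_sym, values.to_s.split(',').map(&:strip)]
end

# Keep only items whose attribute matches one of the given values.
def filter_items(items, param)
  key, values = parse_filter(param)
  items.select { |item| values.include?(item[key].to_s) }
end

items = [
  { name: 'Memento',   year: 2000 },
  { name: 'Looper',    year: 2012 },
  { name: 'Inception', year: 2010 }
]

filter_items(items, 'year=2000,2012').map { |i| i[:name] }
# => ["Memento", "Looper"]
```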