
hoarder-app's Introduction

A self-hostable bookmark-everything app with a touch of AI for the data hoarders out there.

homepage screenshot

Features

  • 🔗 Bookmark links, take simple notes and store images.
  • ⬇️ Automatic fetching for link titles, descriptions and images.
  • 📋 Sort your bookmarks into lists.
  • 🔎 Full text search of all the content stored.
  • ✨ AI-based (aka ChatGPT) automatic tagging, with support for local models via Ollama!
  • 🔖 Chrome plugin and Firefox addon for quick bookmarking.
  • 📱 An iOS app that's pending Apple's review (currently in beta testing), and an Android app.
  • 🌙 Dark mode support (web only so far).
  • 💾 Self-hosting first.
  • [Planned] Downloading the content for offline reading.

⚠️ This app is under heavy development and it's far from stable.

Documentation

Demo

You can access the demo at https://try.hoarder.app. Log in with the following credentials:

email: [email protected]
password: demodemo

The demo is seeded with some content, but it's in read-only mode to prevent abuse.

Stack

  • NextJS for the web app, using the app router.
  • Drizzle for the database and its migrations.
  • NextAuth for authentication.
  • tRPC for client->server communication (see the sketch after this list).
  • Puppeteer for crawling the bookmarks.
  • OpenAI because AI is so hot right now.
  • BullMQ for scheduling the background jobs.
  • Meilisearch for the full content search.
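
To make the wiring concrete, here's a minimal sketch of how a tRPC mutation can hand a bookmarked URL off to a BullMQ job for a worker to crawl. This is illustrative only; the router, queue, and field names are assumptions, not Hoarder's actual code.

import { initTRPC } from "@trpc/server";
import { Queue } from "bullmq";
import { z } from "zod";

const t = initTRPC.create();

// Hypothetical queue, backed by the Redis instance from the docker compose setup.
const crawlQueue = new Queue("crawl", { connection: { host: "redis", port: 6379 } });

export const appRouter = t.router({
  bookmarks: t.router({
    // Persist the bookmark (omitted), then enqueue a crawl job that a separate
    // worker process (Puppeteer for fetching, OpenAI for tagging) picks up later.
    create: t.procedure
      .input(z.object({ url: z.string().url() }))
      .mutation(async ({ input }) => {
        await crawlQueue.add("crawl", { url: input.url });
        return { status: "queued" };
      }),
  }),
});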

Why did I build it?

I browse reddit, twitter and hackernews a lot from my phone. I frequently find interesting stuff (articles, tools, etc.) that I'd like to bookmark and read later when I'm in front of a laptop: the typical read-it-later app use case. Initially, I was using Pocket for that. Then I got into self-hosting and wanted to self-host this use case too. I used memos for quick notes and I loved it, but it was lacking some features that I found important for this use case, such as link previews and automatic tagging (more on that in the next section).

I'm a systems engineer in my day job (and have been for the past 7 years), and I didn't want to get too detached from the web development world. I decided to build this app as a way to keep my hands dirty with web development and, at the same time, build something that I care about and use every day.

Alternatives

  • memos: I love memos. I have it running on my home server and it's one of my most used self-hosted apps. It doesn't, however, archive or preview the links shared in it. I dump a lot of links there, and I'd have loved to be able to tell which link is which just by looking at my timeline. Also, given the variety of things I dump there, I'd have loved some sort of automatic tagging for what I save. This is exactly the use case I'm trying to tackle with Hoarder.
  • mymind: Mymind is the closest alternative to this project and the one I drew a lot of inspiration from. It's a commercial product though.
  • raindrop: A polished open source bookmark manager that supports links, images and files. It's not self-hostable though.
  • Bookmark managers (mostly focused on bookmarking links):
    • Pocket: Pocket is what hooked me into the whole idea of read-it-later apps. I used it a lot. However, I recently got into home-labbing and became obsessed with the idea of running my services in my home server. Hoarder is meant to be a self-hosting first app.
    • Linkwarden: An open-source self-hostable bookmark manager that I ran for a bit in my homelab. It's focused mostly on links and supports collaborative collections.
    • Omnivore: Omnivore is a pretty cool open source read-it-later app. Unfortunately, it's heavily dependent on Google Cloud infra, which makes self-hosting it quite hard. They published a blog post on how to run a minimal Omnivore, but it was lacking a lot of stuff. Self-hosting doesn't really seem to be a high priority for them, and that's something I care about, so I decided to build an alternative.
    • Wallabag: Wallabag is a well-established open source read-it-later app written in PHP, and I think it's the common recommendation on reddit for such apps. To be honest, I didn't give it a real shot, and the UI just felt a bit dated for my liking. Honestly, it's probably much more stable and feature complete than this app, but where's the fun in that?
    • Shiori: Shiori is meant to be an open source Pocket clone written in Go. It ticks all the marks but doesn't have my super sophisticated AI-based tagging. (JK, I only found out about it after I decided to build my own app, so here we are 🤷).

Star History

[Star History Chart]

hoarder-app's People

Contributors

ahmadmuj, cedmax, lbrame, mohamedbassem, rosin1, vivekmiyani


hoarder-app's Issues

feature request: live updating

When adding bookmarks from the Chrome extension, it would be nice for the app to auto-update so you don't need to refresh.
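
Since the web app already talks to the server through tRPC's react-query bindings, one low-effort interim approach would be periodic refetching. A sketch; the api object and router path are assumptions, not Hoarder's actual names:

// Poll for new bookmarks every few seconds so items saved from the
// extension show up without a manual refresh.
const bookmarks = api.bookmarks.list.useQuery(undefined, {
  refetchInterval: 5_000, // react-query option: refetch every 5 seconds
});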

Wrong CORS Headers with Cosmos

Hello,
I set up Hoarder behind Cosmos using the latest version, and the service is working fine (including OpenRouter for free inference). But when trying to use the Chrome extension, I get a "Failed to fetch" error. Looking at the Cosmos logs, I see:

2024/03/30 15:48:44 "OPTIONS https://***.xyz/api/trpc/apiKeys HTTP/2.0" from 192.168.1.1:40518 - 204 0B in 3.163343ms

When I try to xh the same link, I get:

HTTP/2.0 404 Not Found
access-control-allow-credentials: true
access-control-allow-credentials: true
access-control-allow-headers: Content-Type, Authorization
access-control-allow-methods: GET, POST, PUT, DELETE, OPTIONS
access-control-allow-origin: *.xyz
access-control-allow-origin: *
content-security-policy: frame-ancestors 'self'
content-type: application/json
date: Sat, 30 Mar 2024 15:59:28 GMT
strict-transport-security: max-age=31536000; includeSubDomains
vary: RSC, Next-Router-State-Tree, Next-Router-Prefetch, Next-Url
x-content-type-options: nosniff
x-ratelimit-limit: 6000
x-ratelimit-remaining: 86
x-ratelimit-reset: 1711815568
x-served-by-cosmos: 1
x-timeout-duration: 4h0m0s
x-xss-protection: 1; mode=block

{
    "error": {
        "json": {
            "message": "No \"query\"-procedure on path \"apiKeys\"",
            "code": -32004,
            "data": {
                "code": "NOT_FOUND",
                "httpStatus": 404,
                "path": "apiKeys",
                "zodError": null
            }
        }
    }
}

and in the hoarder-web logs I have:

[next-auth][warn][NEXTAUTH_URL] 
https://next-auth.js.org/warnings#nextauth_url
s [TRPCError]: No "query"-procedure on path "apiKeys"
    at m (/app/apps/web/.next/server/chunks/673.js:4826:4202)
    at /app/apps/web/.next/server/app/api/trpc/[trpc]/route.js:1:4251
    at Array.map (<anonymous>)
    at g (/app/apps/web/.next/server/app/api/trpc/[trpc]/route.js:1:4185)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  code: 'NOT_FOUND',
  [cause]: undefined
}

The full docker compose for the web app (hiding envs) is:

{
  "services": {
    "Hoarder-WEB": {
      "container_name": "Hoarder-WEB",
      "image": "ghcr.io/mohamedbassem/hoarder-web:latest",
      "environment": [
        ***
      ],
      "labels": {
        "cosmos-auto-update": "true",
        "cosmos-force-network-mode": "cosmos-web-default",
        "cosmos.stack": "web",
        "cosmos.stack.main": "true"
      },
      "ports": [
        "0.0.0.0:8096:3000/tcp",
        ":::8096:3000/tcp"
      ],
      "volumes": [
        {
          "Type": "bind",
          "Source": "/volume1/docker/hoarder/data",
          "Target": "/data"
        }
      ],
      "networks": {
        "cosmos-web-default": {},
        "hoarder": {}
      },
      "routes": null,
      "restart": "on-failure",
      "devices": null,
      "expose": [],
      "depends_on": [],
      "command": "/bin/sh -c (cd /db_migrations && node index.js) && node server.js",
      "entrypoint": "docker-entrypoint.sh",
      "working_dir": "/app/apps/web",
      "user": "root",
      "hostname": "26308f0b7bf2",
      "network_mode": "cosmos-web-default",
      "healthcheck": {
        "test": null,
        "interval": 0,
        "timeout": 0,
        "retries": 0,
        "start_period": 0
      }
    }
  }
}
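
Worth noting from the response transcript above: access-control-allow-credentials and access-control-allow-origin each appear twice, with conflicting origin values (*.xyz and *). Browsers fail the CORS check when Access-Control-Allow-Origin carries more than one value, so double header injection (plausibly one set from Cosmos and one from Hoarder) would explain the extension's "Failed to fetch" even though the server itself responds. A quick way to confirm from Node 18+ (the domain is a placeholder for the redacted one in the logs):

// Node's fetch joins duplicate header values with a comma; anything other
// than a single origin here will be rejected by the browser's CORS check.
const res = await fetch("https://example.xyz/api/trpc/apiKeys", { method: "OPTIONS" });
console.log(res.headers.get("access-control-allow-origin")); // e.g. "*.xyz, *"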

Signing out falling back on localhost:port

Hi,

I've got Hoarder up and running on my virtual machine running docker. I only access it through IP:PORT.
When signing out, it tries to go back to LOCALHOST:PORT, which won't work.

I've tried adding the API_URL value in the docker-compose.yml file, but no luck.

Other than that: very interesting app, looking forward to using it and seeing it develop!

Thanks!

logout redirects to localhost:3000

As the title says, when I log out of Hoarder, it redirects me to localhost:3000. Is there somewhere this default redirect address can be changed?

Server address with / breaks extension configuration

When you use a server address ending with a slash (https://sub.domain.tld/), the extension shows "Failed to fetch".

Dev tools logs:

Access to fetch at 'https://sub.domain.tld//api/trpc/apiKeys.exchange?batch=1' from origin 'chrome-extension://kgcjekpmcjjogibpjebkhaanilehneje' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: Redirect is not allowed for a preflight request.

Removing the slash fixes the issue.
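
From the error message, the trailing slash produces a double slash in the path (tld//api), which the server answers with a redirect, and preflight requests aren't allowed to follow redirects. A minimal sketch of the kind of normalization that would sidestep this (illustrative, not the extension's actual code):

function apiUrl(serverAddress: string, path: string): string {
  // Strip trailing slashes so "https://sub.domain.tld/" and
  // "https://sub.domain.tld" produce the same request URL.
  const base = serverAddress.replace(/\/+$/, "");
  return base + "/" + path.replace(/^\/+/, "");
}

// apiUrl("https://sub.domain.tld/", "/api/trpc/apiKeys.exchange?batch=1")
//   => "https://sub.domain.tld/api/trpc/apiKeys.exchange?batch=1"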

How do i sign up?

I installed Hoarder on my laptop for test purposes, and now I wonder how I can access it.

image uploading doesn't appear to work

When I paste or drag an image into the "save" box, nothing happens. No console errors. Chrome on Mac.

EDIT: I see it works from the Mac screenshot tool, but not from copy/paste or dragging from Chrome. How is this feature intended to be used?

DISABLE_SIGNUPS not working

I set the .env variable DISABLE_SIGNUPS=true, but it didn't disable the option.

I ran docker compose up -s again after saving the .env already.

MeiliSearchApiError: The Authorization header is missing. It must use the bearer authorization method.

I have this error inside the Hoarder-WORKERS container:
openai job failed: MeiliSearchApiError: The Authorization header is missing. It must use the bearer authorization method.

And I get this error inside the Hoarder-MEILI container:

MeiliSearchApiError: The Authorization header is missing. It must use the bearer authorization method.
    at /app/apps/web/.next/server/chunks/673.js:4817:1188
    ... 2 lines matching cause stack trace ...
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  code: 'INTERNAL_SERVER_ERROR',
  name: 'TRPCError',
  [cause]: l [MeiliSearchApiError]: The Authorization header is missing. It must use the bearer authorization method.
      at /app/apps/web/.next/server/chunks/673.js:4817:1188
      at Generator.next (<anonymous>)
      at s (/app/apps/web/.next/server/chunks/673.js:4815:69399)
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
    code: 'missing_authorization_header',
    type: 'auth',
    link: 'https://docs.meilisearch.com/errors#missing_authorization_header',
    httpStatus: 401
  }
}
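
This error usually means the Meilisearch server was started with a master key, but the client connecting to it wasn't given one. With the official JS client, the key is passed at construction time. A sketch, with env variable names taken from the compose file elsewhere on this page:

import { MeiliSearch } from "meilisearch";

const client = new MeiliSearch({
  host: process.env.MEILI_ADDR!,        // e.g. http://meilisearch:7700
  apiKey: process.env.MEILI_MASTER_KEY, // must match the server's MEILI_MASTER_KEY
});

In practice, the thing to check is that MEILI_MASTER_KEY is present in the env file that both the workers and the meilisearch containers read.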

[Feature Request] OpenAI Tag Cleaning

Just a couple of ideas to remove the cloud of tags that appears with OpenAI tagging.

Would it be possible to have OpenAI look at existing tags and choose from the most relevant ones? Perhaps this could be done more efficiently with an embeddings search on the documents and/or on the existing tags to find similar ones.

Another idea could be to set a "number of tags to generate" with OpenAI, so it could settle for an existing general tag vs. something new and hyper-specific.

E.g., this link generated some pretty silly tags, in my opinion:

image
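
The embeddings idea could be as simple as comparing a generated tag's embedding against the user's existing tags and reusing the nearest one when it's close enough. A sketch with hypothetical data shapes; cosine similarity works over any embedding model's vectors:

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

type Tag = { name: string; embedding: number[] };

// Reuse an existing tag if it's semantically close to the generated candidate.
function dedupeTag(candidate: Tag, existing: Tag[], threshold = 0.85 /* hypothetical cutoff */): string {
  let best = { name: candidate.name, score: threshold };
  for (const tag of existing) {
    const score = cosine(candidate.embedding, tag.embedding);
    if (score > best.score) best = { name: tag.name, score };
  }
  return best.name;
}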

Invalid URL

NOTE: not a high priority issue, just reporting what I'm seeing.

Upon adding a new bookmark, it failed to crawl the site with the message: {"message":"INVALID_URL, Need to provide a valid URL.","code":"INVALID_URL","description":"Need to provide a valid URL."}

The url was https://www.healthline.com/health/fitness-exercise/plantar-fasciitis-stretches#What-is-plantar-fasciitis?
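
For what it's worth, that URL parses fine with the WHATWG URL parser (the trailing "?" belongs to the fragment, not the query string), which suggests the crawler's validation is stricter than the platform's. A quick check in Node:

const u = new URL(
  "https://www.healthline.com/health/fitness-exercise/plantar-fasciitis-stretches#What-is-plantar-fasciitis?"
);
console.log(u.hash); // "#What-is-plantar-fasciitis?" (valid; the "?" is part of the fragment)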

Install documentation: What do I do with HOARDER_VERSION=release? What release is docker going to give me when I run it?

Please explain what is supposed to populate HOARDER_VERSION. You don't mention it here and it's not in the links there... Am I supposed to guess what it is? Should I guess that it's supposed to be "release"? Or should I go to the git repo and grab the current version? If the latter, am I to change that every time the version changes? This is confusing and thus frustrating...

https://docs.hoarder.app/installation

3. Populate the environment variables

To configure the app, create a .env file in the directory and add this minimal env file:

NEXTAUTH_SECRET=super_random_string
HOARDER_VERSION=release
MEILI_MASTER_KEY=another_random_string

You should change the random strings. You can use openssl rand -base64 36 to generate the random strings.

Persistent storage and the wiring between the different services is already taken care of in the docker compose file.

Keep in mind that every time you change the .env file, you'll need to re-run docker compose up.

If you want more config params, check the config documentation here.
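
For anyone else confused by this: judging from the compose file quoted later on this page, the variable is interpolated into the Docker image tag (image: ghcr.io/mohamedbassem/hoarder-web:${HOARDER_VERSION:-release}), so HOARDER_VERSION=release literally pulls the image tagged release, presumably the latest stable build. Pinning a specific version would mean setting the variable to that release's tag instead.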

Feature Request: REST API

This is really great software; I love the clear motivation and where it fits into the ecosystem.

It'd be lovely to get an API for some common actions, primarily retrieving content, searching, and adding bookmarks from outside the app.

Use case:

Handle websites with cookies better

Is it possible to handle websites with cookie-consent overlays better?
Many websites I add show no thumbnail, and the only text extracted is the cookie-consent information.
image

Android App

Really like your tool.

I was able to add Ollama for the AI work, but I'm a bit lost on Android.

How do I use the bookmarking tool/app on Android?

Thank you.

[Feature Request] Publish the android app

The mobile app is already cross-platform. We just need to test and publish it to the play store.

EDIT2:

The app is now live on the play store: https://play.google.com/store/apps/details?id=app.hoarder.hoardermobile&pcampaignid=web_share

EDIT:

OK, so Google just accepted the app for closed testing!

To join the closed testing group, you'll need to join this google group: https://groups.google.com/g/hoarder-android-testers

Once you've joined (I guess you might need to give it a minute or two), you can use the following link to install the app from the app store:

We will need at least 20 testers for 2 weeks for the app to get listed on the Play Store. Please give it a try and let me know how it goes!

Feature request: undelete

Ummm... there needs to be an Undelete link here!! I.e., "The bookmark you didn't mean to delete has been deleted! Undelete?"

I looked at the screenshot again: "The bookmark has been deleted! Undelete?" would be good too.

Screenshot from 2024-03-27 20-22-02

A couple of other things (they may already exist, I don't know yet): audiobookshelf has a backup system where you can keep n days of backups of the database. If you don't have that, it would be highly desirable.

Is there a log of the URL I deleted? I have no idea what it was. I hope it wasn't the one about cleaning mortar off bricks in New Zealand!

Crawling job failed: {"code":"SQLITE_ERROR"}

Hi, I just got this set up with Ollama for inference. I'm seeing the above error in the workers container when I click the "Recrawl All Links" button on the admin page. I also get this series of errors when adding a note:

2024-03-30T18:38:18.740Z error: [inference][3] inference job failed: SqliteError: no such table: bookmarks
2024-03-30T18:38:18.742Z error: Something went wrong when marking the tagging status: SqliteError: no such table: bookmarks
2024-03-30T18:38:18.760Z error: [search][5] openai job failed: SqliteError: no such table: bookmarks
2024-03-30T18:38:19.245Z error: [inference][3] inference job failed: SqliteError: no such table: bookmarks
2024-03-30T18:38:19.246Z error: Something went wrong when marking the tagging status: SqliteError: no such table: bookmarks
2024-03-30T18:38:19.851Z error: [search][5] openai job failed: SqliteError: no such table: bookmarks
2024-03-30T18:38:20.250Z error: [inference][3] inference job failed: SqliteError: no such table: bookmarks
2024-03-30T18:38:20.250Z error: Something went wrong when marking the tagging status: SqliteError: no such table: bookmarks
2024-03-30T18:38:21.956Z error: [search][5] openai job failed: SqliteError: no such table: bookmarks
2024-03-30T18:38:25.971Z error: [search][5] openai job failed: SqliteError: no such table: bookmarks
2024-03-30T18:38:33.996Z error: [search][5] openai job failed: SqliteError: no such table: bookmarks

Here is my docker-compose file:

version: "3.8"
services:
  web:
    image: ghcr.io/mohamedbassem/hoarder-web:${HOARDER_VERSION:-release}
    restart: unless-stopped
    volumes:
      - /docker-data/hoarder/data:/data
    ports:
      - 81:3000
    env_file:
      - stackstack.env
    environment:
      REDIS_HOST: redis
      MEILI_ADDR: http://meilisearch:7700
      DATA_DIR: /data
  redis:
    image: redis:7.2-alpine
    restart: unless-stopped
    volumes:
      - /docker-data/hoarder/redis:/data
  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:100
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
  meilisearch:
    image: getmeili/meilisearch:v1.6
    restart: unless-stopped
    env_file:
      - stack.env
    volumes:
      - /docker-data/hoarder/meilisearch:/meili_data
  workers:
    image: ghcr.io/mohamedbassem/hoarder-workers:${HOARDER_VERSION:-release}
    restart: unless-stopped
    volumes:
      - /docker-data/hoarder/workers:/data
    env_file:
      - stack.env
    environment:
      REDIS_HOST: redis
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      DATA_DIR: /data
      # OPENAI_API_KEY: ...
    depends_on:
      web:
        condition: service_started

volumes:
  redis:
  meilisearch:
  data:
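
Two things stand out in this compose file that could plausibly explain "no such table": the web service reads stackstack.env while the others read stack.env, and, more importantly, web mounts /docker-data/hoarder/data while workers mounts /docker-data/hoarder/workers. Since the migrations run in the web container, the workers would be opening a different, empty SQLite database. A sketch of the likely fix is pointing both services at the same host directory:

  workers:
    volumes:
      - /docker-data/hoarder/data:/data   # same host path as the web service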

Error: Something went wrong

I'm attempting to add a text bookmark (markdown specifically) and I keep getting a red warning at the bottom of the screen saying "something went wrong". Which docker log should I look in for additional logging messages?

My text, if it helps; it's just my own notes from setting up Cloudflare DNS.


Cloudflared DNS

https://docs.pi-hole.net/guides/dns/cloudflared/

https://serverfault.com/questions/956447/bind-set-port-for-forwarders

AMD64 architecture (most devices)
Download the installer package, then use apt-get to install the package along with any dependencies. Proceed to run the binary with the -v flag to check it is all working:

For Debian/Ubuntu

wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
sudo apt-get install ./cloudflared-linux-amd64.deb
cloudflared -v

Configuring cloudflared to run on startup
Create a cloudflared user to run the daemon:

sudo useradd -s /usr/sbin/nologin -r -M cloudflared
Proceed to create a configuration file for cloudflared:

sudo nano /etc/default/cloudflared
Edit configuration file by copying the following in to /etc/default/cloudflared. This file contains the command-line options that get passed to cloudflared on startup:

# Commandline args for cloudflared, using Cloudflare DNS
CLOUDFLARED_OPTS=--port 5053 --upstream https://1.1.1.1/dns-query --upstream https://1.0.0.1/dns-query

Update the permissions for the configuration file and cloudflared binary to allow access for the cloudflared user:

sudo chown cloudflared:cloudflared /etc/default/cloudflared
sudo chown cloudflared:cloudflared /usr/local/bin/cloudflared

Then create the systemd script by copying the following into /etc/systemd/system/cloudflared.service. This will control the running of the service and allow it to run on startup:

sudo nano /etc/systemd/system/cloudflared.service

[Unit]
Description=cloudflared DNS over HTTPS proxy
After=syslog.target network-online.target

[Service]
Type=simple
User=cloudflared
EnvironmentFile=/etc/default/cloudflared
ExecStart=/usr/local/bin/cloudflared proxy-dns $CLOUDFLARED_OPTS
Restart=on-failure
RestartSec=10
KillMode=process

[Install]
WantedBy=multi-user.target

Enable the systemd service to run on startup, then start the service and check its status:

sudo systemctl enable cloudflared
sudo systemctl start cloudflared
sudo systemctl status cloudflared

Now test that it is working! Run the following dig command; a response similar to the one below should be returned:

pi@raspberrypi:~ $ dig @127.0.0.1 -p 5053 google.com

; <<>> DiG 9.11.5-P4-5.1-Raspbian <<>> @127.0.0.1 -p 5053 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12157
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 22179adb227cd67b (echoed)
;; QUESTION SECTION:
;google.com.                    IN      A

;; ANSWER SECTION:
google.com.             191     IN      A       172.217.22.14

;; Query time: 0 msec
;; SERVER: 127.0.0.1#5053(127.0.0.1)
;; WHEN: Wed Dec 04 09:29:50 EET 2019
;; MSG SIZE  rcvd: 77

Asset too Big

I'm testing out this app on my homelab. While testing uploads of various images, I ran into an issue with a 17 MB PNG file.

image

Request for HTTP and SOCKS5 Proxy Support

Hi,

Could you add support for HTTP and SOCKS5 proxies?

Many users, including myself, need proxies to access sites like Google and Facebook due to local internet restrictions.
This feature would help a lot.

Thanks,
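
For what it's worth, Chromium itself accepts a --proxy-server switch, so this could plausibly be passed through to the crawling browser. A sketch with Puppeteer, which the stack already uses; the proxy address is a placeholder:

import puppeteer from "puppeteer";

// Launch the crawling browser behind a SOCKS5 (or HTTP) proxy.
const browser = await puppeteer.launch({
  args: ["--proxy-server=socks5://127.0.0.1:1080"], // hypothetical proxy address
});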

Consider using a named .env file

macOS doesn't like files named just .env (with no base name), so when running the docker script I needed to give the file a full name and modify the compose file to reference a hoarder.env file.

feature request: pdf support

I'd like to be able to capture a PDF URL, e.g. https://gavinadair.files.wordpress.com/2017/03/baker-changes-of-mind.pdf

Currently, it is captured, but no tags are added, nor is text extracted.
image

logs:

hoarder-workers 2024-03-27T16:53:27.624Z info: [Crawler][9] Will crawl "https://gavinadair.files.wordpress.com/2017/03/baker-changes-of-mind.pdf" for link with id "h03n4dihn2gp0kn8giwiyir7"
hoarder-workers 2024-03-27T16:53:27.813Z info: [search][30] Completed successfully
hoarder-workers 2024-03-27T16:53:27.822Z error: [Crawler][9] Crawling job failed: {}
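
Judging from the log, the crawler fetches the PDF but has nothing to extract from it as a page. One plausible approach, sketched with the pdf-parse npm package (an assumption, not necessarily what Hoarder would adopt), is to branch on the content type and pull the text directly:

import pdf from "pdf-parse";

// If the fetched resource is a PDF, extract its text for tagging and search
// instead of rendering it in the headless browser.
async function extractPdfText(url: string): Promise<string> {
  const res = await fetch(url);
  if (res.headers.get("content-type")?.includes("application/pdf")) {
    const data = await pdf(Buffer.from(await res.arrayBuffer()));
    return data.text;
  }
  return "";
}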

Column width on mobile web changes when using search

When search is selected or characters are entered in the search box, the padding on the sides of the column is reduced.

It actually looks better in search mode, so applying that style to the main view and reducing the padding would be appropriate.

Unable to manually edit title

Great repo BTW! Thank you!

Once I've added my bookmark, I'd like to manually edit the title of the item. Yes, the scraper does grab the title text from the metadata, but I'd like to change it myself.

Is this possible?

Also, is it possible to import my list of bookmarks from Chrome or another browser?

Feature request: Checklists ✅

First off: it looks amazing, and I'm looking forward to trying to replace a couple of my apps with this (Wallabag, Google Keep, potentially Plainpad, and some TiddlyWiki use cases).

If I could request one feature then it would be a checklist feature.

I haven't tested the Android app yet, but it would be neat to also have the app in the share dialog for saving links/notes.

Good luck with the project, it already looks great! :)

Firefox Plugin - NetworkError on Sign in

When I try to connect to my instance of Hoarder on a Tailscale IP and a non-standard port, it works on a web page but doesn't seem to work in the browser extension:

Web page:

image

Browser extension:

image
image

[Feature Request] Automatic tags based on metadata + adjusting image tag extraction prompt

It would be neat if it was possible to define rules that add tags based on metadata.

For example:
https://www.bukowskis.com/sv/lots/1540548-matbord-bento-hem-2000-tal
I'd want the tag furniture added because the URL contains bukowskis.com.

https://www.nordiskagalleriet.se/nemo-lighting/escargot?variant=10213172
I'd want the tag mid-century-modern added because the body contains the text Le Corbusier.

Building on top of that, it would be nice if it were possible to trigger changes in the image tag extraction prompt based on existing tags.

So if the tag furniture is present from the text extraction pass (or through metadata rules), the image prompt would append suggestions focused on describing the item rather than generic tags.
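
A rule engine for this could stay very small. A sketch of a hypothetical shape, using the two examples above:

type TagRule = {
  tag: string;
  matches: (url: string, body: string) => boolean;
};

const rules: TagRule[] = [
  { tag: "furniture", matches: (url) => url.includes("bukowskis.com") },
  { tag: "mid-century-modern", matches: (_url, body) => body.includes("Le Corbusier") },
];

// Tags derived from metadata rules, applied before (or alongside) AI tagging;
// the resulting tags could also steer the image-extraction prompt.
function metadataTags(url: string, body: string): string[] {
  return rules.filter((r) => r.matches(url, body)).map((r) => r.tag);
}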

Keyboard shortcut for saving new bookmark is not cross-platform

If I paste a bookmark into the upper-left box, the button at the bottom changes to "Press ⌘+Enter to Save". This is on a Windows machine; I do not have a Command key, and no combination of Ctrl or Alt seems to work. I can still press the button with the mouse.
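
A common fix (a sketch, not Hoarder's actual handler) is to accept either modifier and label the button per platform:

// Treat Cmd+Enter (macOS) and Ctrl+Enter (Windows/Linux) as the save shortcut.
function isSaveShortcut(e: KeyboardEvent): boolean {
  return e.key === "Enter" && (e.metaKey || e.ctrlKey);
}

// Pick the hint shown on the save button.
const label = navigator.platform.startsWith("Mac")
  ? "Press ⌘+Enter to Save"
  : "Press Ctrl+Enter to Save";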

Inference Failed with Ollama

I added Ollama support per the instructions. Some of the requests seemed to work fine, but I found the following in the logs:

2024-03-27T14:01:40.649Z error: [inference][17] inference job failed: Error: Expected a completed response.
2024-03-27T14:06:40.997Z error: [inference][18] inference job failed: TypeError: fetch failed

At least one of my bookmarks does not have any tags added to it.

I'm running this on a docker container on linux.

Feature Request: Share list with other user

As the title says, it would be awesome if we could share notes with other users on the server, for example share a list with a user and put the notes meant to be shared into that list.

This is a feature missing in usememos/memos, and I think it would be great if you could implement it in Hoarder.

Disable new users sign-up

Hi Mohamed,

This is indeed a very interesting app!

I am suggesting a feature: disable new user sign-up.

Use case

After the app is deployed and exposed to the internet, anyone can sign up. However, it would be nice to prevent other people from signing up to one's instance.

Thanks for your work.
