AudioMind is a Python-based tool for extracting meaningful insights from audio files. By leveraging Whisper and LLMs, it transcribes and summarizes audio content, making it easy to derive actionable information.
- OpenAI
- Whisper (OpenAI API) [DEFAULT]
- Whisper (On-Device)
- Create a journal entry from your voice note.
- Transcribe audio files to text.
- Summarize the transcribed text.
- Easy to integrate and use.
- Get insights from any audio file, including podcasts, interviews, lectures, etc.
- Solve real problems with the extracted insights.
- Python 3.x
- pip
Use the pip Package (Recommended)

```shell
pip install audiomind
```

```python
from audiomind import AudioMind

audiomind = AudioMind()
audiomind.process(file="examples/1.mp3")
```
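If you have a whole folder of recordings, you can collect them and pass each one to `process` as shown above. A small helper for that (the extension list here is an assumption, not something AudioMind prescribes):

```python
from pathlib import Path

def find_audio_files(folder: str, exts: tuple = (".mp3", ".wav", ".m4a")) -> list:
    # Gather audio files so each can be passed to AudioMind.process(file=...).
    return sorted(str(p) for p in Path(folder).iterdir() if p.suffix.lower() in exts)
```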
Clone the Repository

```shell
git clone https://github.com/onlyoneaman/audiomind.git
cd audiomind
```

Create a Virtual Environment

```shell
python3 -m venv .venv
```

Activate the virtual environment:

Unix or macOS:

```shell
source .venv/bin/activate
```

Windows:

```shell
.\.venv\Scripts\activate
```

Install Dependencies

```shell
pip install -r requirements.txt
```
Environment Variables

Copy `.env.template` to `.env`:

```shell
cp .env.template .env
```

Open `.env` and provide your OpenAI API key:

```shell
OPENAI_API_KEY=your_openai_api_key_here
DREAMBOAT_API_KEY=your_dreamboat_api_key_here  # optional
```
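AudioMind reads these variables from the process environment. A minimal sketch of how such a lookup typically works (the function name is illustrative, not AudioMind's actual API):

```python
import os

def load_api_key(name: str = "OPENAI_API_KEY") -> str:
    # Read the key from the environment; fail fast with a clear message
    # instead of surfacing a cryptic authentication error later.
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; copy .env.template to .env and fill it in")
    return key
```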
Run the Application

```shell
python3 -m audiomind
```
Place your audio files in the `examples/` folder and run the `audio_to_journal.py` script. The script will transcribe the audio and summarize it.
```shell
python3 -m audiomind --file examples/1.mp3
```
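Under the hood, the flow is transcribe-then-summarize. A minimal sketch of that two-step pipeline, assuming an `openai` v1-style client is passed in (`build_summary_prompt`, `transcribe_and_summarize`, and the chat model name are illustrative, not AudioMind's actual API):

```python
def build_summary_prompt(transcript: str) -> str:
    # Ask the LLM for insights rather than a plain restatement.
    return (
        "Summarize the following transcript into key insights, "
        "decisions, and action items:\n\n" + transcript
    )

def transcribe_and_summarize(audio_path: str, client, model: str = "gpt-4o-mini") -> str:
    # Step 1: transcribe with Whisper via the OpenAI API.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        ).text
    # Step 2: summarize the transcript with a chat model.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_summary_prompt(transcript)}],
    )
    return response.choices[0].message.content
```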
You can add some information about yourself in a `person.txt` file. AudioMind will use this context as well when creating the journal entry.
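One simple way such personal context can be folded into the prompt (a sketch; the `person.txt` file name comes from above, while the function itself is illustrative):

```python
from pathlib import Path

def add_personal_context(prompt: str, person_file: str = "person.txt") -> str:
    # Prepend the user's background notes, if present, so the LLM can
    # tailor the journal entry; otherwise fall back to the bare prompt.
    path = Path(person_file)
    if not path.exists():
        return prompt
    return "About the author:\n" + path.read_text().strip() + "\n\n" + prompt
```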
- Create a journal entry from your voice note.
- Improve the journal entry.
- Create a summary of a podcast episode.
- Create a summary of a lecture.
- Create a summary of a meeting.
Feel free to submit issues and enhancement requests.
MIT
Enjoy using AudioMind!