
panopto-download's Introduction

Background & Overview

Students deserve access to course materials, even after completing a course. At Carnegie Mellon, many lecture videos are recorded and made available online on Panopto. There is no easy user interface for downloading course videos, but folders of recorded videos automatically generate RSS feeds.

This script parses Panopto RSS feeds and extracts video URLs for batch downloading. In general, this project is designed to make it as easy as possible to download Panopto videos and lectures.
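The parsing step can be sketched as follows. This is a minimal illustration of the technique, not the script's actual code, and extract_video_urls is a hypothetical helper: Panopto podcast feeds list each video as an <item> whose <enclosure> tag carries the .mp4 URL.

```python
# Minimal sketch of the RSS-parsing step. extract_video_urls is a
# hypothetical helper, not part of the script's actual interface.
import xml.etree.ElementTree as ET

def extract_video_urls(feed_xml):
    """Return (url, title) pairs for every <enclosure> in the feed."""
    root = ET.fromstring(feed_xml)
    results = []
    for item in root.iter("item"):
        title = item.findtext("title", default="untitled")
        enclosure = item.find("enclosure")
        if enclosure is not None:
            results.append((enclosure.get("url"), title))
    return results

sample = """<rss><channel>
  <item><title>Lecture 1</title>
    <enclosure url="https://example.com/lec1.mp4" type="video/mp4"/>
  </item>
</channel></rss>"""

print(extract_video_urls(sample))
# → [('https://example.com/lec1.mp4', 'Lecture 1')]
```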

Quick Start

  1. To use the script, clone the repository or download panopto-video-urls.py, and go into the panopto-download directory.

    git clone https://github.com/jstrieb/panopto-download.git && cd panopto-download
    
  2. Make sure Python 3 is installed (check by running python --version or locating a python3 binary). Also make sure the requests library is installed -- it may already be present as a dependency of another package. To install it (particularly if seeing a ModuleNotFoundError), run the following.

    python3 -m pip install -r requirements.txt
    

    Additionally, if on a system with limited permissions, instead run the following command. This only installs the dependencies for the local user, rather than system-wide. In particular, if running the script on the Andrew servers, students can only install locally since their accounts lack sudo permissions.

    python3 -m pip install --user -r requirements.txt
    
  3. Test that the command works. When run, there should be output (somewhat) like the following.

    $ python3 panopto-video-urls.py
    usage: panopto-video-urls.py [-h] [-o OUTPUT_FILE] [-x] podcast_url
    panopto-video-urls.py: error: the following arguments are required: podcast_url
    
  4. Get a Panopto RSS URL. For more information on how to do this, see the next section.

  5. Now, use the script to generate a list of video URLs to download. This can be saved into a file using the -o option, or it can be piped directly into xargs if on a system where it is installed. The latter is my preferred option. To download all videos from an RSS link, I do the following.

    python3 panopto-video-urls.py -x "http://<some link>" | xargs -L 2 -P 8 curl -s -L
    

    The -L 2 option to xargs specifies that it should read two consecutive lines as arguments to each command that is run (use -L 3 if using cookies), and the -P 8 option specifies how many processes to run at once. A value of 0 means running as many processes as possible, but not every xargs implementation supports it (BSD xargs rejects -P 0), so 8 is used for greater cross-platform compatibility.

    The -L option to curl tells curl to follow redirects until it reaches the video, and is unrelated to the -L option passed to xargs. The -s option tells curl to run silently, so each parallel process does not print its own progress bar.
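For systems where xargs or curl is unavailable, the same pipeline can be approximated in pure Python. This is a hedged sketch; safe_name and download_all are illustrative helpers, not part of the panopto-download script itself.

```python
# Python alternative to the xargs/curl pipeline: download a list of
# (url, title) pairs with 8 parallel workers. safe_name and download_all
# are hypothetical helpers, not part of the script's actual interface.
import re
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def safe_name(title):
    """Turn a video title into a filesystem-safe .mp4 filename."""
    return re.sub(r"[^A-Za-z0-9._-]+", "_", title) + ".mp4"

def download(url, title):
    # urlopen follows redirects by default, playing the role of curl -L
    with urlopen(url) as response, open(safe_name(title), "wb") as f:
        while chunk := response.read(1 << 16):
            f.write(chunk)

def download_all(pairs, workers=8):
    """pairs: iterable of (url, title); workers mirrors xargs -P 8."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda p: download(*p), pairs))
```

Thread-based parallelism is a reasonable fit here because the work is network-bound rather than CPU-bound.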

Getting Panopto RSS Feed URLs

Panopto automatically generates RSS feeds for folders of videos. Links to these feeds are used as input to the script.

  1. To get a course or folder's RSS link, first navigate to its folder. One way to do this is by using the "Shared With Me" link on the left side of the page.

    Step 1

  2. When looking at a list of videos, click the name of the folder ("Location") on the right to navigate to that folder.

    Step 2

  3. Finally, in the top-right corner, there is an RSS button. Click that, and right click on "Subscribe to RSS" to copy the RSS link.

    Step 3

panopto-download's People

Contributors

jstrieb, dependabot[bot]


panopto-download's Issues

Video URLs from textfile

Hi,

unfortunately, I can't get an RSS link for any of my lectures (TU Kaiserslautern).
Can you upload a sample one?

We usually get direct links to each stream/video.
My idea is to copy and paste these links into a text file instead of writing an RSS file by hand or script, and pass this to the Python link extractor to extract the .mp4 links. Will this work?

My links look like this.
https://vcm.uni-kl.de/Panopto/Pages/Viewer.aspx?id=---some-UID---

If so, I'll add a parameter and method for passing such a text file and open a pull request.

PS: Big thanks for creating this project. I was just about to write a script for this purpose and luckily found yours 🥇

Panopto deprecated RSS feeds

I recently moved to a unit that uses Panopto. I was shocked to learn that no bulk download exists for users (or admins), and was thrilled to find this tool that leveraged RSS feeds.
Then I discovered that Panopto got rid of RSS feeds in May 2022.

Sad. I know some people who love how user friendly Panopto seems compared to Kaltura, but not having a bulk download feature native to the tool seems insane, let alone blocking off the workaround for "security" reasons.

Getting an error

Hi! I tried running this, but get the following error:

xargs: max. processes must be >0
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>
BrokenPipeError: [Errno 32] Broken pipe

I'm using the following command:

python3 panopto-video-urls.py -x "http://<some link>" | xargs -L 2 -P 0 wget -O

but with my link. Any help would be appreciated. Thanks!

Does using more processes actually download faster?

In my experience, the bandwidth from Panopto's servers is fixed, and hammering the server with many simultaneous downloads is not actually faster (the total amount of data being downloaded at once is the same). Also, my shell doesn't display wget's progress bars nicely.

quick and dirty url extraction

If all the python script does is parse the RSS XML to find mp4 links, then something like grep mp4 Podcast.xml is enough to get some links. Maybe extract URLs with xq
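The grep idea above can be sketched in Python as well. This is a rough equivalent, not the script's actual approach, and grep_mp4 is a hypothetical helper:

```python
# Quick-and-dirty equivalent of `grep mp4 Podcast.xml`: pull every
# http(s) .mp4 URL out of the raw feed text with a regex, skipping
# the XML parser entirely. grep_mp4 is a hypothetical helper.
import re

def grep_mp4(text):
    return re.findall(r'https?://[^"\s<>]+\.mp4', text)

sample = '<enclosure url="https://example.com/lec1.mp4" type="video/mp4"/>'
print(grep_mp4(sample))
# → ['https://example.com/lec1.mp4']
```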

Direct link - xml.etree.ElementTree.ParseError: not well-formed (invalid token)

My institution does not support RSS feeds, so I tried to use the direct link to a video and got the following error:

$ python3 panopto-video-urls.py https://...panopto.eu/Panopto/Pages/Viewer.aspx?id=dcd84143-6e33-4922-8b4a-ac8301593910
Traceback (most recent call last):
  File "panopto-video-urls.py", line 87, in <module>
    main()
  File "panopto-video-urls.py", line 67, in main
    video_urls, video_titles = parse_xml(podcast_url)
  File "panopto-video-urls.py", line 42, in parse_xml
    root = ET.fromstring(data)
  File "/usr/lib/python3.8/xml/etree/ElementTree.py", line 1320, in XML
    parser.feed(text)
xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 69, column 53

Page source:
allowVideoPodcastFeeds: false,
allowAudioPodcastFeeds: false,

Are direct links supported?
