
LPMS - Livepeer Media Server

LPMS is a media server that can run independently or on top of the Livepeer network. It allows you to manipulate and broadcast a live video stream. Currently, LPMS supports RTMP as the input format and RTMP/HLS as output formats.

LPMS can be integrated into another service, or run as a standalone service. To try LPMS as a standalone service, simply get the package:

go get -d github.com/livepeer/lpms/cmd/example

Go to the lpms root directory at $GOPATH/src/github.com/livepeer/lpms. If needed, install the required dependencies; see the Requirements section below. Then build the sample app and run it:

go build cmd/example/main.go
./example

Requirements

LPMS requires libavcodec (ffmpeg) and friends. See install_ffmpeg.sh. Running this script will install everything in ~/compiled. In order to build LPMS, the dependent libraries need to be discoverable by pkg-config and golang. If you installed everything with install_ffmpeg.sh, run export PKG_CONFIG_PATH=~/compiled/lib/pkgconfig:$PKG_CONFIG_PATH so the dependencies are picked up.

Running golang unit tests (test.sh) requires the ffmpeg and ffprobe executables in addition to the libraries. However, none of these are run-time requirements; the executables are not used outside of testing, and the libraries are statically linked by default. Note that dynamic linking may substantially speed up rebuilds if doing heavy development.

Testing out LPMS

The test LPMS server exposes a few different endpoints:

  1. rtmp://localhost:1935/stream/test for uploading/viewing an RTMP video stream.
  2. http://localhost:7935/stream/test_hls.m3u8 for consuming the HLS video stream.

Do the following steps to view a live stream video:

  1. Start LPMS by running go run cmd/example/main.go

  2. Upload an RTMP video stream to rtmp://localhost:1935/stream/test. We recommend using ffmpeg or OBS.

For ffmpeg on osx, run: ffmpeg -f avfoundation -framerate 30 -pixel_format uyvy422 -i "0:0" -c:v libx264 -tune zerolatency -b:v 900k -x264-params keyint=60:min-keyint=60 -c:a aac -ac 2 -ar 44100 -f flv rtmp://localhost:1935/stream/test

For OBS, fill in Settings->Stream->URL to be rtmp://localhost:1935

  3. If you have successfully uploaded the stream, you should see something like this in the LPMS output:

I0324 09:44:14.639405   80673 listener.go:28] RTMP server got upstream
I0324 09:44:14.639429   80673 listener.go:42] Got RTMP Stream: test

  4. Now that you have an RTMP video stream running, you can view it from the server. Simply run ffplay http://localhost:7935/stream/test.m3u8 and you should see the HLS video playback.

Integrating LPMS

LPMS exposes a few different methods for customization. As an example, take a look at cmd/main.go.

To create a new LPMS server:

// Specify ports you want the server to run on, and the working directory for
// temporary files. See `core/lpms.go` for a full list of LPMSOpts
opts := lpms.LPMSOpts{
    RtmpAddr: "127.0.0.1:1935",
    HttpAddr: "127.0.0.1:7935",
    WorkDir:  "/tmp",
}
lpms := lpms.New(&opts)

To handle RTMP publish:

lpms.HandleRTMPPublish(
	//getStreamID
	func(url *url.URL) (strmID string) {
		return getStreamIDFromPath(url.Path)
	},
	//getStream
	func(url *url.URL, rtmpStrm stream.RTMPVideoStream) (err error) {
		return nil
	},
	//finishStream
	func(url *url.URL, rtmpStrm stream.RTMPVideoStream) (err error) {
		return nil
	})
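The getStreamIDFromPath helper used in these callbacks is not defined in this README. A minimal, self-contained sketch, assuming the stream ID is simply the last path component:

```go
package main

import (
	"fmt"
	"strings"
)

// getStreamIDFromPath extracts the stream ID from a request path such as
// "/stream/test". This is an illustrative sketch, not the LPMS implementation.
func getStreamIDFromPath(reqPath string) string {
	parts := strings.Split(strings.Trim(reqPath, "/"), "/")
	return parts[len(parts)-1]
}

func main() {
	fmt.Println(getStreamIDFromPath("/stream/test")) // test
}
```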

To handle RTMP playback:

lpms.HandleRTMPPlay(
	//getStream
	func(ctx context.Context, reqPath string, dst av.MuxCloser) error {
		glog.Infof("Got req: %v", reqPath)
		streamID := getStreamIDFromPath(reqPath)
		src := streamDB.db[streamID]
		if src != nil {
			return src.ReadRTMPFromStream(ctx, dst)
		} else {
			glog.Error("Cannot find stream for ", streamID)
			return stream.ErrNotFound
		}
		return nil
	})

To handle HLS playback:

lpms.HandleHLSPlay(
	//getHLSBuffer
	func(reqPath string) (*stream.HLSBuffer, error) {
		streamID := getHLSStreamIDFromPath(reqPath)
		buffer := bufferDB.db[streamID]
		s := streamDB.db[streamID]

		if s == nil {
			return nil, stream.ErrNotFound
		}

		if buffer == nil {
			//Create the buffer and start copying the stream into the buffer
			buffer = stream.NewHLSBuffer()
			bufferDB.db[streamID] = buffer

			//Subscribe to the stream
			sub := stream.NewStreamSubscriber(s)
			go sub.StartHLSWorker(context.Background())
			err := sub.SubscribeHLS(streamID, buffer)
			if err != nil {
				return nil, stream.ErrStreamSubscriber
			}
		}

		return buffer, nil
	})
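The streamDB and bufferDB stores indexed above are application-side bookkeeping, not part of LPMS. A minimal sketch of such a store, assuming a plain mutex-guarded map is sufficient:

```go
package main

import (
	"fmt"
	"sync"
)

// registry is a hypothetical thread-safe map, usable for both the stream and
// buffer lookups in the handlers above.
type registry[T any] struct {
	mu sync.RWMutex
	db map[string]T
}

func newRegistry[T any]() *registry[T] {
	return &registry[T]{db: make(map[string]T)}
}

func (r *registry[T]) Get(id string) (T, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	v, ok := r.db[id]
	return v, ok
}

func (r *registry[T]) Put(id string, v T) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.db[id] = v
}

func main() {
	streams := newRegistry[string]()
	streams.Put("test", "rtmp-stream-handle")
	v, ok := streams.Get("test")
	fmt.Println(v, ok)
}
```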

GPU Support

Processing on Nvidia GPUs is supported. To enable this capability, FFmpeg needs to be built with GPU support. See the FFmpeg guidelines on this.

To execute the nvidia tests within the ffmpeg directory, run this command:

go test --tags=nvidia -run Nvidia

To run the tests on a particular GPU, use the GPU_DEVICE environment variable:

# Runs on GPU number 3
GPU_DEVICE=3 go test --tags=nvidia -run Nvidia

Aside from the tests themselves, there is a sample program that can be used as a reference to the LPMS GPU transcoding API. The sample program can select GPU or software processing via CLI flags. Run the sample program via:

# software processing
go run cmd/transcoding/transcoding.go transcoder/test.ts P144p30fps16x9,P240p30fps16x9 sw

# nvidia processing, GPU number 2
go run cmd/transcoding/transcoding.go transcoder/test.ts P144p30fps16x9,P240p30fps16x9 nv 2

Testing GPU transcoding with failed segments from Livepeer production environment

To test transcoding of segments failed on production in Nvidia environment:

  1. Install Livepeer from sources by following the installation guide
  2. Install Google Cloud SDK
  3. Make sure you have access to the bucket with the segments
  4. Download the segments:
    gsutil cp -r gs://livepeer-production-failed-transcodes /home/livepeer-production-failed-transcodes
  5. Run the test
    cd transcoder
    FAILCASE_PATH="/home/livepeer-production-failed-transcodes" go test --tags=nvidia -timeout 6h -run TestNvidia_CheckFailCase
  6. After the test has finished, it will display transcoding stats. Per-file results are logged to results.csv in the same directory

Contribute

Thank you for your interest in contributing to LPMS!

To get started:

lpms's People

Contributors

abab1l, alexkordic, cyberj0g, darkdarkdragon, dependabot[bot], dob, ericxtang, hjpotter92, iameli, j0sh, jailuthra, jameswanglf, leszko, mikeindiaalpha, mjh1, mk-livepeer, oscar-davids, ranjeetkaur17, thomshutt, yondonfu


lpms's Issues

HTML player

HTML players for HLS and RTMP.

We can use video.js for RTMP, and hls.js for HLS.

Auto-create ABR list based on incoming video parameters

Is your feature request related to a problem? Please describe.
There is no sensible ABR default now - the developer has to worry about setting ABR for each video when doing transcoding. It can get potentially complex if the node has multiple input videos at the same time.

Describe the solution you'd like
Simple logic to automatically create the ABR list based on incoming video parameters. We can add a method in ffmpeg_segment_transcoder.go called autoDetectABRProfiles that takes an HLSSegment and returns a list of VideoProfile.

The logic:

  • Only support 16x9 and 4x3, standard resolutions.
  • Based on input resolution, create an ABR list that includes everything in the standard resolutions list below the input resolution.
  • Try to guess FPS, and select the same FPS for the ABR list.
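The logic above can be sketched as follows for 16x9 input. The profile type and names here are assumptions based on this issue; the real VideoProfile is an enumerated type in the Livepeer codebase.

```go
package main

import "fmt"

// videoProfile is a stand-in for the real VideoProfile type.
type videoProfile struct {
	Name   string
	Height int
	FPS    int
}

// Standard 16x9 heights from the list below, highest first.
var standard16x9 = []videoProfile{
	{"P1080p", 1080, 0}, {"P720p", 720, 0}, {"P576p", 576, 0},
	{"P360p", 360, 0}, {"P288p", 288, 0}, {"P144p", 144, 0},
}

// autoDetectABRProfiles returns every standard 16x9 rendition strictly below
// the input height, at whichever standard FPS (30 or 60) is closer to the input.
func autoDetectABRProfiles(inHeight, inFPS int) []videoProfile {
	fps := 30
	if inFPS >= 45 {
		fps = 60
	}
	var out []videoProfile
	for _, p := range standard16x9 {
		if p.Height < inHeight {
			name := fmt.Sprintf("%s%dfps16x9", p.Name, fps)
			out = append(out, videoProfile{name, p.Height, fps})
		}
	}
	return out
}

func main() {
	for _, p := range autoDetectABRProfiles(720, 30) {
		fmt.Println(p.Name)
	}
}
```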

Describe alternatives you've considered
We could dynamically decide on the ABR list and accept non-standard formats. But that requires us to move away from the enumerated VideoProfile concept and create dynamic VideoProfiles. We'd have to do some fuzzy math to "snap into" standard resolutions.

Additional context
This feature depends on #83

In go-livepeer, we can create a -autoABR flag, default to true. If -autoABR is set to false, rely on the user to set the transcode profiles. Otherwise, try to automatically set the ABR list.

Standard ABR Lists
16x9

  • P1080p60fps16x9
  • P1080p30fps16x9
  • P720p60fps16x9
  • P720p30fps16x9
  • P576p60fps16x9
  • P576p30fps16x9
  • P360p60fps16x9
  • P360p30fps16x9
  • P288p60fps16x9
  • P288p30fps16x9
  • P144p60fps16x9
  • P144p30fps16x9

4x3

  • P960p60fps4x3
  • P960p30fps4x3
  • P480p60fps4x3
  • P480p30fps4x3
  • P240p60fps4x3
  • P240p30fps4x3
  • P120p60fps4x3
  • P120p30fps4x3

Retry segmenter in case of disconnect

In case the segmenter terminates unexpectedly, we should attempt to re-create the segmenter if the source stream is still going.

We should preserve segment sequence numbers here. Note that the end of the source stream can be signaled by rs.EOF within core.SegmentRTMPToHLS.

lpms/core/lpms.go

Lines 131 to 145 in 5814b66

func (l *LPMS) SegmentRTMPToHLS(ctx context.Context, rs stream.RTMPVideoStream, hs stream.HLSVideoStream, segOptions segmenter.SegmenterOptions) error {
	// set localhost if necessary. Check more problematic addrs? [::] ?
	rtmpAddr := l.rtmpServer.Addr
	if strings.HasPrefix(rtmpAddr, "0.0.0.0") {
		rtmpAddr = "127.0.0.1" + rtmpAddr[len("0.0.0.0"):]
	}
	localRtmpUrl := "rtmp://" + rtmpAddr + "/stream/" + rs.GetStreamID()
	glog.V(4).Infof("Segment RTMP Req: %v", localRtmpUrl)
	//Invoke Segmenter
	s := segmenter.NewFFMpegVideoSegmenter(l.workDir, hs.GetStreamID(), localRtmpUrl, segOptions)
	c := make(chan error, 1)
	ffmpegCtx, ffmpegCancel := context.WithCancel(context.Background())
	go func() { c <- s.RTMPToHLS(ffmpegCtx, true) }()

Unit tests should be implemented to ensure this behavior is reliable; however, the core package does not have any unit tests yet.

One approach that should be avoided is doing the retry within segmenter.RTMPToHLS since, despite the name, it is transport agnostic, and we don't want to box the API into one type of stream or another.

If you need help join our Discord chat and post your question in the #dev channel: https://discord.gg/7wRSUGX

FFmpeg Segmenter - based transcoder

SRS gave us a good start, but we need a more flexible strategy for transcoding. FFmpeg supports video segments, and we can transcode these segments individually after storing them.

To do this we need to:

  • Create a Video Segmenter that calls out to FFmpeg.
  • Following the ExternalTranscoder, create a Segment Transcoder.

ffmpeg.go:32] RTMP2HLS Transmux Return : File name too long

Hey livepeer team,
I ran into this error while trying to broadcast to livepeer. Any advice?

cc @ericxtang 😄

go-livepeer version: 0.1.14-unstable
livepeer command: ./livepeer -bootnode -v 4 -offchain
ffmpeg version: 3.1-static from static-ffmpeg repo.

ffmpeg command ffmpeg -i "Heat.1995.mp4" -framerate 30 -vcodec libx264 -tune zerolatency -b 1000k -acodec aac -ac 1 -b:a 96k -strict -2 -f flv "rtmp://localhost:1935/movie2"

OS: Ubuntu 16.04 64bit

output:

Setting up bootnode
I0325 21:23:06.418290   32565 basic_network.go:457] 

Setting up protocol: /livepeer_video/0.0.1
I0325 21:23:06.418321   32565 livepeer.go:191] ***Livepeer is in off-chain mode***
I0325 21:23:06.418971   32565 lpms.go:59] HTTP Server listening on :8935
I0325 21:23:06.419046   32565 lpms.go:55] LPMS Server listening on :1935
I0325 21:23:06.602802   32565 basic_notifiee.go:62] Notifiee - ClosedStream: 1220e272e0beb8c0a247b74d3411755ccbf0091b71c133d59be6525c8bd25c7e2b05 - 122019c1a1f0d9fa2296dccb972e7478c5163415cd55722dcf0123553f397c45df7e
I0325 21:23:14.347583   32565 listener.go:32] RTMP server got upstream: rtmp://localhost:1935/movie2
I0325 21:23:14.424086   32565 mediaserver.go:166] Cannot automatically detect the video profile - setting it to {P720p30fps16x9 4000k 30 1280x720 16:9}
I0325 21:23:14.424194   32565 mediaserver.go:267] 

ManifestID: 1220e272e0beb8c0a247b74d3411755ccbf0091b71c133d59be6525c8bd25c7e2b05456832baaa3de8f0c8de6e30c3f0427ee3471eae2e0477c30d8477b3b0ed5387

I0325 21:23:14.424226   32565 mediaserver.go:268] 

hlsStrmID: 1220e272e0beb8c0a247b74d3411755ccbf0091b71c133d59be6525c8bd25c7e2b05456832baaa3de8f0c8de6e30c3f0427ee3471eae2e0477c30d8477b3b0ed5387P720p30fps16x9

I0325 21:23:14.424295   32565 lpms.go:99] Segment RTMP Req: rtmp://localhost:1935/stream/1220e272e0beb8c0a247b74d3411755ccbf0091b71c133d59be6525c8bd25c7e2b059491545e46f4a8faa2948850ba584757bd5a67f4c1e7dd15d084606ed46068f1RTMP
I0325 21:23:14.544019   32565 player.go:57] LPMS got RTMP request @ rtmp://localhost:1935/stream/1220e272e0beb8c0a247b74d3411755ccbf0091b71c133d59be6525c8bd25c7e2b059491545e46f4a8faa2948850ba584757bd5a67f4c1e7dd15d084606ed46068f1RTMP
Error writing header
I0325 21:23:14.793332   32565 ffmpeg.go:32] RTMP2HLS Transmux Return : File name too long
I0325 21:23:14.793363   32565 video_segmenter.go:239] Cleaning up video segments.....
E0325 21:23:14.793424   32565 lpms.go:138] Error segmenting stream: File name too long
I0325 21:23:14.810225   32565 basic_rtmp_videostream.go:42] RTMP stream got error: write tcp 127.0.0.1:1935->127.0.0.1:55292: write: connection reset by peer
I0325 21:23:50.213398   32565 livepeer.go:342] Exiting Livepeer: interrupt

Video Buffer Eviction

Right now the video buffer just grows indefinitely. We need an eviction policy for both RTMP and HLS.

Segmenter error: "Waiting to load duration from playlist"

Reported by @chrishobcroft. This happened after 8 hrs of streaming -

So, results from last night:
For musicalwomenofberlin.com / * https://media.livepeer.org/channels/0xee5447FA534b3CF77600559C104815ba9dB73554

  • the localhost node was broadcasting "darkness" when I came in this morning.
  • OBS was still streaming into the node after 16 hrs
  • 6.7Gb of "untidied" segments in .tmp
  • It seems something failed after 8hrs 42minutes (log snippets below, full log will be attached), and I stopped the stream after 16hrs
E0315 02:50:32.554214   16367 basic_network.go:478] Got error decoding msg from 122019c1a1f0d9fa2296dccb972e7478c5163415cd55722dcf0123553f397c45df7e: EOF (*errors.errorString).
E0315 02:50:32.554697   16367 basic_network.go:463] Error handling stream: EOF
I0315 02:50:32.554798   16367 basic_notifiee.go:62] Notifiee - ClosedStream: 12206f03715c865174e7d426d2639b3dc9569456a6f950b016d8a6da67cc9687d8c0 - 122019c1a1f0d9fa2296dccb972e7478c5163415cd55722dcf0123553f397c45df7e
I0315 02:50:36.319651   16367 basic_notifiee.go:62] Notifiee - ClosedStream: 12206f03715c865174e7d426d2639b3dc9569456a6f950b016d8a6da67cc9687d8c0 - 122019c1a1f0d9fa2296dccb972e7478c5163415cd55722dcf0123553f397c45df7e
I0315 02:50:36.319864   16367 basic_notifiee.go:42] Notifiee - Disconnected. Local: 12206f03715c865174e7d426d2639b3dc9569456a6f950b016d8a6da67cc9687d8c0 - Remote: 122019c1a1f0d9fa2296dccb972e7478c5163415cd55722dcf0123553f397c45df7e
I0315 02:55:21.353083   16367 video_segmenter.go:116] Waiting to load duration from playlist
I0315 10:22:18.604450   16367 basic_broadcaster.go:84] broadcast worker done
I0315 10:22:18.609961   16367 video_segmenter.go:239] Cleaning up video segments.....

GPU Acceleration

The transcoder should be able to take advantage of GPU acceleration when it's available.

Need to make sure adding GPU acceleration won't change the output segments.

Transcoder Design

Two primary options when it comes to transcoder design.

Single threaded, straight through: each profile is encoded serially. This closely matches the current behavior of the transcoder. Note that FFmpeg currently doesn't go much over 100% CPU when transcoding; in many cases this actually leads to the transcode job running slower than real-time. Broadcast latency suffers as a result.

This is the quickest to implement, but does not offer any parallelization potential. It also should not be any worse than what we have now.

Split the processing into stages.

  1. Demuxing
  2. Decoding
  3. Rescaling (video) or Resampling (audio)
  4. Encoding
  5. Muxing

The benefits of splitting are twofold.

  1. We can calculate the optimal code-path that a given profile should take. For example, if the input and output have the same codecs and resolution, we can simply transmux. If only the codec differs, we can skip rescaling. Different output formats that share the same encoding profile (eg, mpegts and mp4) can re-use the same encoded packets [1].

  2. Allows for increased concurrency: each component can run independently, including having a thread for each rescaler, encoder and/or muxer.
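The code-path selection in point 1 can be sketched as a simple decision function. The types and names here are illustrative, not the LPMS API:

```go
package main

import "fmt"

// streamParams captures the properties that decide the pipeline.
type streamParams struct {
	Codec         string
	Width, Height int
}

// pickPath chooses the cheapest pipeline for a given input/output pair.
func pickPath(in, out streamParams) string {
	switch {
	case in == out:
		return "transmux" // same codec and resolution: just rewrap the container
	case in.Width == out.Width && in.Height == out.Height:
		return "skip-rescale" // only the codec differs
	default:
		return "full-transcode" // decode, rescale, re-encode
	}
}

func main() {
	in := streamParams{"h264", 1280, 720}
	fmt.Println(pickPath(in, in))                              // transmux
	fmt.Println(pickPath(in, streamParams{"vp8", 1280, 720}))  // skip-rescale
	fmt.Println(pickPath(in, streamParams{"h264", 640, 360}))  // full-transcode
}
```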

There are three ways we can implement this splitting:

  1. Entirely in C. Simplest approach architecturally, although it would likely require additional scaffolding to handle the bookkeeping and interaction between each component (eg, a thread-safe queue).

  2. Bind each stage individually to Cgo, with the bookkeeping done in go-land. We achieve concurrency via goroutines. This approach seems the neatest in principle, but there is concern about hitting GOMAXPROCS and creating contention with the scheduler. In general, 'long running' Cgo routines are supposed to be detached to avoid counting against GOMAXPROCS [2][3], but the granularity and frequency of our Cgo calls might work against us [4]. Note that each transcoding profile could have up to 5 Cgo entry points (rescaler, resampler, video-encoder, audio-encoder, muxer), so our current 3-profile output could have 17 Cgo entry points (demuxer, decoder, 3x5-profile). While we should still achieve some semblance of work interleaving, this approach might actually make things worse if it disrupts the scheduler too much.

  3. Rust to manage the bookkeeping and concurrency. Since Rust can expose a C-compatible FFI, this can be bound with Cgo as well as a single entry point. Drawback is this adds a rather large component to the build system that might not be justified.

[1] Technically we'd have to run a bitstream filter to convert between MP4's Annex B format and a transport stream, but that's a much lighter operation than a re-encode.

[2] Working off the assumption that we shouldn't be overriding the user's GOMAXPROCS for them eg, by wrapping the livepeer go-binary in a shell script.

[3] golang/go#8636 (comment)

[4] Not to mention that transitioning the Go-Cgo boundary is slower, although not sure how much that would affect us in practice.

Inspect HLS Video Segment

Is your feature request related to a problem? Please describe.
We don't know the parameters of a video segment.

Describe the solution you'd like
To implement a video inspector that returns parameters about a HLSSegment.

What we want to understand:

  • Video resolution
  • Video aspect ratio
  • Video FPS
  • Video Codec
  • Audio Codec

We want to implement a vidInspector/inspector.go with a single Inspect method that takes an HLSSegment and returns the above parameters. For inspiration, check out lpms_length. You should follow the example of ffmpeg.go, possibly adding functions to it (by adding to the native libav integration).
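A sketch of what the Inspect result might look like. The struct fields mirror the bullet list above, and the aspect ratio is derived from the resolution via the greatest common divisor; the names here are assumptions, not the final API:

```go
package main

import "fmt"

// SegmentInfo is a hypothetical result type for Inspect(seg HLSSegment).
type SegmentInfo struct {
	Width, Height int
	FPS           float64
	VideoCodec    string
	AudioCodec    string
}

// gcd computes the greatest common divisor (Euclid's algorithm).
func gcd(a, b int) int {
	for b != 0 {
		a, b = b, a%b
	}
	return a
}

// AspectRatio reduces the resolution to a "16:9"-style ratio string.
func (s SegmentInfo) AspectRatio() string {
	d := gcd(s.Width, s.Height)
	return fmt.Sprintf("%d:%d", s.Width/d, s.Height/d)
}

func main() {
	info := SegmentInfo{Width: 1280, Height: 720, FPS: 30, VideoCodec: "h264", AudioCodec: "aac"}
	fmt.Println(info.AspectRatio()) // 16:9
}
```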

Describe alternatives you've considered
We can also implement an inspector for the RTMP stream, but doing the check for the HLS segment can be used on the broadcaster and the transcoder.

Additional context
We are planning to use this inspector to inspect incoming videos, and automatically make choices about the ABR list.

We assume the video parameters will remain the same throughout the entire video. In practice, we'll probably examine the first segment of the video after the RTMP video is segmented into HLS, or if the input is in HLS, simply inspect the first segment.

CORS header

Please add a CORS header so that a client can request the manifest ID

First segment is sometimes invalid

We aren't guaranteed to get a keyframe from joy4 upon initial connection, and the resulting segment sometimes makes the transcoder die. Drop the first few video packets if they aren't keyframes.

Additionally, if the GOP is long enough, then the stream may fail to open entirely, so we may need to retry. Related: #53
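A sketch of the proposed fix, dropping leading video packets until the first keyframe. The packet type here is a stand-in for joy4's av.Packet:

```go
package main

import "fmt"

// packet is a minimal stand-in for av.Packet.
type packet struct {
	IsKeyFrame bool
	Seq        int
}

// dropUntilKeyframe discards leading packets until the first keyframe, so the
// first segment always starts at a decodable point.
func dropUntilKeyframe(pkts []packet) []packet {
	for i, p := range pkts {
		if p.IsKeyFrame {
			return pkts[i:]
		}
	}
	return nil // no keyframe seen; the caller may need to retry
}

func main() {
	in := []packet{{false, 0}, {false, 1}, {true, 2}, {false, 3}}
	fmt.Println(len(dropUntilKeyframe(in))) // 2
}
```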

Adaptive Bitrate Streaming for livepeer.tv

Currently I am broadcasting in 1920x1080 for livepeer.tv

For my viewers to be able to watch, they must have

  • a fast enough internet connection to be able to receive the feed
  • a powerful enough device to show the content

I would like to be able to serve the content at different resolutions (and bit rates) so that viewers with less powerful internet connections and devices can still watch.

Webrtc support

LPMS should be able to connect to webrtc connections. Currently there isn't great library support in golang for webrtc. Our plan is to collaborate with the libp2p team on this.

HLS Playlist Generation

HLSBuffer should be able to generate its own playlist from the segments it knows about. This is helpful during live streaming, when viewers start the stream in the middle and you don't want the playlist to include past segments.

New Interface: VideoStream, HLSVideoStream and RTMPVideoStream

The VideoStream struct was written a while ago (when our architecture was still nascent), and it's overloaded with RTMP and HLS functionalities. Over the past few months, it's clear that we don't need to have both concepts live in the same struct. In fact, the overhead of having to maintain VideoStream and HLSBuffer as separate concepts is becoming more complex than necessary.

So I propose a change in our stream structs, where we would have:

type VideoStream interface {
	GetStreamID() string
	GetStreamFormat() VideoFormat
}
type HLSVideoStream interface {
	VideoStream
	GetMasterPlaylist() (*m3u8.MasterPlaylist, error)
	GetMediaPlaylist(strmID string) (*m3u8.MediaPlaylist, error)
	GetHLSSegment(strmID string, segName string) (*HLSSegment, error)
	AddMediaPlaylist(strmID string, variant *m3u8.Variant) error
	AddHLSSegment(strmID string, seg *HLSSegment) error
}

and

type RTMPVideoStream interface {
	VideoStream
	ReadRTMPFromStream(ctx context.Context, dst av.MuxCloser) error
	WriteRTMPToStream(ctx context.Context, src av.DemuxCloser) error
}

Here we separate out RTMP streams from HLS streams, but combine HLSStream and HLSBuffer to a single struct.

This may look like a big change, but I think a lot of code can be re-used and it'll make the rest of the code base much simpler. Now is a good opportunity for this small re-architecture because we need to add MasterPlaylist for ABS anyways, and doing that in the old architecture introduces even more unnecessary complexity.

Override avio_open, restore default log level

Right now the avformat avio_open callback leads to noisy log output with HLS. Suppress this, so we can restore the default log level, which may otherwise have useful information. Related: c9011b4

Also, binding avio_open/avio_close will be beneficial should we ever use the ffmpeg-generated M3U8 manifests rather than polling to generate our own. This will give us a notification when new segments or manifests are ready.

panic: runtime error

I get this error when running go run cmd/example/main.go:

I1215 16:19:50.098904   37078 lpms.go:56] HTTP Server listening on :8000
I1215 16:19:50.099032   37078 lpms.go:52] LPMS Server listening on :1935
I1215 16:19:50.120190   37078 main.go:168] Got req: %!(EXTRA string=/stream/6427cb354ffbb854ee9f)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x127ea40]

goroutine 22 [running]:
main.main.func7(0xc420122180, 0x132d98a, 0x1a, 0xc42004def8, 0x1)
    /Users/apple/workspace/lpms/cmd/example/main.go:170 +0xe0
github.com/livepeer/lpms/vidplayer.(*VidPlayer).rtmpServerHandlePlay.func1(0xc42005ea80)
    /Users/apple/workspace/go/src/github.com/livepeer/lpms/vidplayer/player.go:58 +0xcd
github.com/nareix/joy4/format/rtmp.(*Server).handleConn(0xc420074d80, 0xc42005ea80, 0x1, 0x1)
    /Users/apple/workspace/go/src/github.com/nareix/joy4/format/rtmp/rtmp.go:73 +0xc1
github.com/nareix/joy4/format/rtmp.(*Server).ListenAndServe.func1(0xc420074d80, 0xc42005ea80)
    /Users/apple/workspace/go/src/github.com/nareix/joy4/format/rtmp/rtmp.go:118 +0x39
created by github.com/nareix/joy4/format/rtmp.(*Server).ListenAndServe
    /Users/apple/workspace/go/src/github.com/nareix/joy4/format/rtmp/rtmp.go:117 +0x1b6
exit status 2

macOS 10.12.6
go 1.9.2

RTMP library upgrade

We currently use the joy4 RTMP library. It's an incomplete implementation of the RTMP standard, and we should move to a better lib. For example, if you use OBS on a Mac, you can see that LPMS works with the x264 encoder but breaks with the Apple VT H264 encoder.

There are currently no good RTMP libraries in golang, so we'd probably have to integrate something like nginx-rtmp.

error when installing lpms

When I run the command go get github.com/livepeer/lpms, I get an error:

root@f06ed28f1cbf:/go# 
root@f06ed28f1cbf:/go# go get github.com/livepeer/lpms
package github.com/livepeer/lpms: no Go files in /go/src/github.com/livepeer/lpms
root@f06ed28f1cbf:/go# 

How do I solve this problem?
Is there any other way to get this program?

Adaptive Streaming Support

Currently our HLS format does not support adaptive streaming. We will have to implement that part of the spec.

Audio encoding parameters

We have several tunable parameters for video, but none for audio. We might want some, especially for downmixing. Eg, codec, sample rate, channels and channel layouts.
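A hypothetical shape for these audio tunables, mirroring the existing video profile idea; the field names here are illustrative only:

```go
package main

import "fmt"

// AudioProfile sketches the audio parameters mentioned above.
type AudioProfile struct {
	Codec         string // e.g. "aac"
	SampleRate    int    // Hz, e.g. 44100
	Channels      int    // e.g. 2 for stereo, 1 for a mono downmix
	ChannelLayout string // e.g. "stereo", "mono"
}

func main() {
	// A downmix-to-mono configuration.
	downmix := AudioProfile{Codec: "aac", SampleRate: 44100, Channels: 1, ChannelLayout: "mono"}
	fmt.Printf("%+v\n", downmix)
}
```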

Handle video-only or audio-only streams in segmenter

Consider these cases:

  • Audio or video only streams
  • Streams that come in late; eg past analyzeduration
  • Invalid video, such as video that doesn't have keyframes. It may still be useful to write audio alone to the segment. Vice versa for audio.

Playlist Winsize

There is a setting in m3u8.MediaPlaylist called "winsize", which determines the number of segments displayed when generating the playlist. This is a better way of dealing with playlist size than manually removing segments - which is what we are doing now.

Check player.go.
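The effect of winsize can be sketched as a sliding window over the segment list (m3u8.MediaPlaylist maintains this internally; this standalone version is for illustration):

```go
package main

import "fmt"

// window returns at most the last winsize segments, the set a live playlist
// would advertise, without manually deleting older entries.
func window(segments []string, winsize int) []string {
	if len(segments) <= winsize {
		return segments
	}
	return segments[len(segments)-winsize:]
}

func main() {
	segs := []string{"seg0.ts", "seg1.ts", "seg2.ts", "seg3.ts", "seg4.ts"}
	fmt.Println(window(segs, 3)) // [seg2.ts seg3.ts seg4.ts]
}
```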

Stream Key Support

This is a discussion issue to suggest the idea of adding support for stream keys to LPMS. The need for this comes up constantly as we discuss the various use cases for how people will use Livepeer - self hosting ingest servers on the open internet, that require authentication in order to actually push a stream.

Perhaps the API is something like

generateStreamKey(expiration_date)

revokeStreamKey(key)

And a flag on startup --authenticationRequired, which would indicate that to accept an RTMP stream a streamKey must be provided?

Perhaps the currently valid stream keys are stored in a config file, such that if you're running multiple nodes you can easily create a process which updates the valid stream keys across machines?

Clearly this deserves more design and discussion - but it seems like a valuable feature of any open source media server.

http api docs

As far as I know these do not exist. I know I can ask for /manifestId as that is stated in the general getting started, but it would be cool to know what else is exposed, and/or have a place where people could request endpoints to be added.

One thing I thought might be fun to build is a way to see how many streams the livepeer media server was streaming at any given time. This is pretty low priority, but curious if the http interface could tell me this.
