Diffstat (limited to 'core/multimedia/opieplayer/libmpeg3/docs/index.html')
-rw-r--r--  core/multimedia/opieplayer/libmpeg3/docs/index.html  306
1 files changed, 306 insertions, 0 deletions
diff --git a/core/multimedia/opieplayer/libmpeg3/docs/index.html b/core/multimedia/opieplayer/libmpeg3/docs/index.html
new file mode 100644
index 0000000..2d79978
--- a/dev/null
+++ b/core/multimedia/opieplayer/libmpeg3/docs/index.html
@@ -0,0 +1,306 @@
+<TITLE>LibMPEG3</TITLE>
+
+<CENTER>
+<FONT FACE=HELVETICA SIZE=+4><B>Using LibMPEG3 to make your own MPEG applications</B></FONT><P>
+
+<TABLE>
+<TR>
+<TD>
+<CODE>
+Author: Adam Williams broadcast@earthling.net<BR>
+Homepage: heroinewarrior.com<BR>
+</CODE>
+</TD>
+</TR>
+</TABLE>
+</CENTER>
+
+<P>
+
+
+LibMPEG3 decodes the many derivatives of the MPEG standards into
+uncompressed data suitable for editing and playback.<P>
+
+libmpeg3 currently decodes:<P>
+
+<BLOCKQUOTE>MPEG-2 video<BR>
+MPEG-1 video<BR>
+mp3 audio<BR>
+mp2 audio<BR>
+ac3 audio<BR>
+MPEG-2 system streams<BR>
+MPEG-1 system streams
+</BLOCKQUOTE><P>
+
+The video output can be in many different color models and frame
+sizes. The audio output can be in two's complement or floating
+point.<P>
+
+
+
+
+
+
+
+
+
+<FONT FACE=HELVETICA SIZE=+4><B>STEP 1: Verifying file compatibility</B></FONT><P>
+
+Programs using libmpeg3 must <CODE>#include "libmpeg3.h"</CODE>.<P>
+
+Call <CODE>mpeg3_check_sig</CODE> to verify that the file can be read by
+libmpeg3. It returns 1 if the file is compatible and 0 if it isn't.<P>
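+
+For example, a minimal sketch of this check, assuming the prototype is
+<CODE>int mpeg3_check_sig(char *path)</CODE> (the helper function and
+its error message are only illustrative):<P>
+
+<CODE><PRE>
+#include "libmpeg3.h"
+#include &lt;stdio.h&gt;
+
+/* Illustrative helper: refuse to go any further if libmpeg3
+   can't read the file. */
+int check_file(char *path)
+{
+	if(!mpeg3_check_sig(path))
+	{
+		fprintf(stderr, "%s: not a file libmpeg3 can read\n", path);
+		return 0;
+	}
+	return 1;
+}
+</PRE></CODE>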
+
+
+
+
+
+
+
+
+
+
+
+<FONT FACE=HELVETICA SIZE=+4><B>STEP 2: Open the file</B></FONT><P>
+
+You need an <CODE>mpeg3_t*</CODE> file descriptor:<P>
+<CODE>
+mpeg3_t* file;
+</CODE>
+<P>
+
+Then you need to open the file:<P>
+
+<CODE>file = mpeg3_open(char *path);</CODE><P>
+
+<CODE>mpeg3_open</CODE> returns NULL if the file couldn't be opened
+for some reason. Be sure to check this. Everything you do with
+libmpeg3 requires passing the <CODE>file</CODE> pointer.<P>
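+
+A sketch of the open step with the error check (the helper name and the
+use of <CODE>exit</CODE> are only illustrative):<P>
+
+<CODE><PRE>
+#include "libmpeg3.h"
+#include &lt;stdio.h&gt;
+#include &lt;stdlib.h&gt;
+
+/* Illustrative helper: open a file or bail out. */
+mpeg3_t* open_mpeg_file(char *path)
+{
+	mpeg3_t *file = mpeg3_open(path);
+
+	if(!file)
+	{
+		fprintf(stderr, "%s: mpeg3_open failed\n", path);
+		exit(1);
+	}
+	return file;
+}
+</PRE></CODE>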
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+<FONT FACE=HELVETICA SIZE=+4><B>STEP 3: How many CPUs do you want to use?</B></FONT><P>
+
+Call <CODE>mpeg3_set_cpus(mpeg3_t *file, int cpus)</CODE> to set how
+many CPUs should be devoted to video decompression. LibMPEG3 can use
+any number. If you don't call this right after opening the file, the
+number of CPUs defaults to 1.<P>
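+
+For example, with <CODE>file</CODE> just opened (the choice of 2 CPUs is
+arbitrary):<P>
+
+<CODE><PRE>
+mpeg3_set_cpus(file, 2);    /* devote 2 CPUs to video decompression */
+</PRE></CODE>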
+
+
+
+
+
+
+
+<FONT FACE=HELVETICA SIZE=+4><B>STEP 4: Get some information about the file</B></FONT><P>
+
+There are a number of queries for the audio components of the stream:<P>
+
+<CODE><PRE>
+int mpeg3_has_audio(mpeg3_t *file);
+int mpeg3_total_astreams(mpeg3_t *file); // Number of multiplexed audio streams
+int mpeg3_audio_channels(mpeg3_t *file, int stream);
+int mpeg3_sample_rate(mpeg3_t *file, int stream);
+long mpeg3_audio_samples(mpeg3_t *file, int stream); // Total length
+</PRE></CODE>
+
+The audio is presented as a number of <B>streams</B> starting at 0 and
+including <CODE>mpeg3_total_astreams</CODE> - 1. Each stream contains a
+certain number of <B>channels</B> starting at 0 and including
+<CODE>mpeg3_audio_channels</CODE> - 1.<P>
+
+The methodology is to first determine whether the file has audio, then
+get the number of streams in the file, then for each stream get the
+number of channels, sample rate, and length.<P>
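+
+Putting the audio queries together, with <CODE>file</CODE> already
+opened, a sketch of that methodology (the <CODE>printf</CODE> reporting
+is only illustrative):<P>
+
+<CODE><PRE>
+if(mpeg3_has_audio(file))
+{
+	int total = mpeg3_total_astreams(file);
+	int stream;
+
+	for(stream = 0; stream &lt; total; stream++)
+	{
+		printf("astream %d: %d channels, %d Hz, %ld samples\n",
+			stream,
+			mpeg3_audio_channels(file, stream),
+			mpeg3_sample_rate(file, stream),
+			mpeg3_audio_samples(file, stream));
+	}
+}
+</PRE></CODE>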
+
+There are also queries for the video components:<P>
+
+<CODE><PRE>
+int mpeg3_has_video(mpeg3_t *file);
+int mpeg3_total_vstreams(mpeg3_t *file); // Number of multiplexed video streams
+int mpeg3_video_width(mpeg3_t *file, int stream);
+int mpeg3_video_height(mpeg3_t *file, int stream);
+float mpeg3_frame_rate(mpeg3_t *file, int stream); // Frames/sec
+long mpeg3_video_frames(mpeg3_t *file, int stream); // Total length
+</PRE></CODE>
+
+The video behavior is the same as with audio, except that video has no
+subdivision under <B>streams</B>. Frame rate is a floating point
+number of frames per second.<P>
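+
+The same pattern works for the video queries, again as a sketch:<P>
+
+<CODE><PRE>
+if(mpeg3_has_video(file))
+{
+	int total = mpeg3_total_vstreams(file);
+	int stream;
+
+	for(stream = 0; stream &lt; total; stream++)
+	{
+		printf("vstream %d: %dx%d, %.2f frames/sec, %ld frames\n",
+			stream,
+			mpeg3_video_width(file, stream),
+			mpeg3_video_height(file, stream),
+			mpeg3_frame_rate(file, stream),
+			mpeg3_video_frames(file, stream));
+	}
+}
+</PRE></CODE>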
+
+
+
+
+
+
+
+<FONT FACE=HELVETICA SIZE=+4><B>STEP 5: Seeking to a point in the file</B></FONT><P>
+
+Each audio stream and each video stream has a position in the file
+independent of every other stream. A variety of methods are available
+for specifying the position of a stream: percentage, frame, or sample.
+Which method you use depends on whether you're seeking audio or video
+and whether you're seeking all tracks to a percentage of the file.<P>
+
+The preferred seeking method if you're writing a player is:<P>
+
+<CODE><PRE>
+int mpeg3_seek_percentage(mpeg3_t *file, double percentage);
+double mpeg3_tell_percentage(mpeg3_t *file);
+</PRE></CODE>
+
+This seeks all tracks to a percentage of the file length. The
+percentage is from 0 to 1.<P>
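+
+For example, a player dragging its position bar to the middle of the
+file might do something like this sketch:<P>
+
+<CODE><PRE>
+mpeg3_seek_percentage(file, 0.5);    /* seek every track to the halfway point */
+
+printf("now at %.3f of the file\n", mpeg3_tell_percentage(file));
+</PRE></CODE>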
+
+The alternative is absolute seeking. The audio seeking is handled
+by:<P>
+
+<CODE><PRE>
+int mpeg3_set_sample(mpeg3_t *file, long sample, int stream); // Seek
+long mpeg3_get_sample(mpeg3_t *file, int stream); // Tell current position
+</PRE></CODE>
+
+and the video seeking is handled by:<P>
+
+<CODE><PRE>
+int mpeg3_set_frame(mpeg3_t *file, long frame, int stream); // Seek
+long mpeg3_get_frame(mpeg3_t *file, int stream); // Tell current position
+</PRE></CODE>
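+
+A sketch of absolute seeking, using arbitrary positions on audio
+stream 0 and video stream 0:<P>
+
+<CODE><PRE>
+mpeg3_set_sample(file, 44100, 0);    /* audio stream 0 to sample 44100 */
+mpeg3_set_frame(file, 100, 0);       /* video stream 0 to frame 100 */
+
+printf("audio at sample %ld, video at frame %ld\n",
+	mpeg3_get_sample(file, 0),
+	mpeg3_get_frame(file, 0));
+</PRE></CODE>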
+
+
+You can either perform percentage seeking or absolute seeking but not
+both on the same file handle. Once you perform either method, the file
+becomes configured for that method.<P>
+
+If you're in percentage seeking mode and you want the current time
+stamp in the file, you can't get it from
+<CODE>mpeg3_tell_percentage</CODE> because you don't know how many
+seconds the total length is. The <CODE>mpeg3_audio_samples</CODE> and
+<CODE>mpeg3_video_frames</CODE> functions don't work in percentage
+seeking mode. Instead use
+
+<CODE><PRE>
+double mpeg3_get_time(mpeg3_t *file);
+</PRE></CODE>
+
+which gives you the last timecode read, in seconds. The MPEG standard
+specifies that timecodes be placed in the streams.<P>
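+
+So a player in percentage seeking mode would display its position along
+these lines (a sketch):<P>
+
+<CODE><PRE>
+double seconds = mpeg3_get_time(file);    /* last timecode read, in seconds */
+
+printf("current position: %.2f seconds\n", seconds);
+</PRE></CODE>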
+
+
+
+
+
+
+
+
+
+
+
+<FONT FACE=HELVETICA SIZE=+4><B>STEP 6: Read the data</B></FONT><P>
+
+To read <B>audio</B> data use:<P>
+
+<CODE><PRE>
+int mpeg3_read_audio(mpeg3_t *file,
+ float *output_f, // Pointer to pre-allocated buffer of floats
+ short *output_i, // Pointer to pre-allocated buffer of int16's
+ int channel, // Channel to decode
+ long samples, // Number of samples to decode
+ int stream); // Stream containing the channel
+</PRE></CODE>
+
+This decodes a buffer of sequential floats or int16's for a single
+channel, depending on which output parameter is non-NULL. To get a
+floating point buffer pass a pre-allocated buffer to
+<CODE>output_f</CODE> and NULL to <CODE>output_i</CODE>. To get an
+int16 buffer pass NULL to <CODE>output_f</CODE> and a pre-allocated
+buffer to <CODE>output_i</CODE>.<P>
+
+After reading an audio buffer, the current position in the one stream
+is advanced. How, then, do you read more than one channel of audio
+data? Use
+
+<CODE><PRE>
+mpeg3_reread_audio(mpeg3_t *file,
+ float *output_f, /* Pointer to pre-allocated buffer of floats */
+ short *output_i, /* Pointer to pre-allocated buffer of int16's */
+ int channel, /* Channel to decode */
+ long samples, /* Number of samples to decode */
+ int stream);
+</PRE></CODE>
+
+to read each remaining channel after the first channel.<P>
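+
+As a sketch, here is one chunk of int16 audio read from both channels
+of a stereo stream 0 (the buffer size is arbitrary):<P>
+
+<CODE><PRE>
+#define BUFFER_SAMPLES 4096    /* arbitrary chunk size */
+
+short left[BUFFER_SAMPLES], right[BUFFER_SAMPLES];
+
+/* Channel 0 advances the position of stream 0 ... */
+mpeg3_read_audio(file, NULL, left, 0, BUFFER_SAMPLES, 0);
+
+/* ... so the remaining channel is reread from the same position. */
+mpeg3_reread_audio(file, NULL, right, 1, BUFFER_SAMPLES, 0);
+</PRE></CODE>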
+
+To read <B>video</B> data there are two methods: RGB frames or YUV
+frames. To get an RGB frame use:<BR>
+
+<CODE><PRE>
+int mpeg3_read_frame(mpeg3_t *file,
+ unsigned char **output_rows, // Array of pointers to the start of each output row
+ int in_x, // Location in input frame to take picture
+ int in_y,
+ int in_w,
+ int in_h,
+ int out_w, // Dimensions of output_rows
+ int out_h,
+ int color_model, // One of the color model #defines in libmpeg3.h
+ int stream);
+</PRE></CODE>
+
+The video decoding works like a camcorder taking a copy of a movie
+screen. The decoder "sees" a region of the movie screen defined by
+<CODE>in_x, in_y, in_w, in_h</CODE> and transfers it to the frame
+buffer defined by <CODE>**output_rows</CODE>. The input values must be
+within the boundaries given by <CODE>mpeg3_video_width</CODE> and
+<CODE>mpeg3_video_height</CODE>. The size of the frame buffer is
+defined by <CODE>out_w, out_h</CODE>. Although the input dimensions
+are constrained, the frame buffer can be any size.<P>
+
+<CODE>color_model</CODE> defines which RGB color model the picture
+should be decoded to and the possible values are given in
+<B>libmpeg3.h</B>. The frame buffer pointed to by
+<CODE>output_rows</CODE> must have enough memory allocated to store the
+color model you select.<P>
+
+<B>You must allocate 4 extra bytes in the last output_row.</B> This is
+scratch area for the MMX routines.<P>
+
+<CODE>mpeg3_read_frame</CODE> advances the position in the one stream by 1 frame.<P>
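+
+A sketch of decoding one full frame from video stream 0, assuming
+<CODE>MPEG3_RGB888</CODE> (3 bytes per pixel) is among the color model
+#defines in <B>libmpeg3.h</B>:<P>
+
+<CODE><PRE>
+#include "libmpeg3.h"
+#include &lt;stdlib.h&gt;
+
+/* Illustrative sketch: decode one whole frame at its native size. */
+unsigned char** read_one_rgb_frame(mpeg3_t *file)
+{
+	int w = mpeg3_video_width(file, 0);
+	int h = mpeg3_video_height(file, 0);
+	int i;
+	unsigned char **output_rows = malloc(sizeof(unsigned char*) * h);
+
+	/* 4 extra scratch bytes for the MMX routines; only the last row
+	   strictly needs them, but padding every row is harmless. */
+	for(i = 0; i &lt; h; i++)
+		output_rows[i] = malloc(w * 3 + 4);
+
+	mpeg3_read_frame(file,
+		output_rows,
+		0, 0, w, h,        /* take the whole input frame */
+		w, h,              /* output at the same size */
+		MPEG3_RGB888,      /* assumed color model #define */
+		0);                /* video stream 0 */
+
+	return output_rows;    /* caller frees the rows when done */
+}
+</PRE></CODE>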
+
+The alternative is YUV frames:<BR>
+
+<CODE><PRE>
+int mpeg3_read_yuvframe(mpeg3_t *file,
+ char *y_output,
+ char *u_output,
+ char *v_output,
+ int in_x,
+ int in_y,
+ int in_w,
+ int in_h,
+ int stream);
+</PRE></CODE>
+
+The behavior of <CODE>in_x, in_y, in_w, in_h</CODE> is identical to
+<CODE>mpeg3_read_frame</CODE> except here you have no control over the
+output frame size. <B>You must allocate in_w * in_h bytes for
+y_output, and in_w * in_h / 4 bytes for each of the chroma
+outputs.</B><P>
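+
+A sketch of the YUV path using the buffer sizes from the rule above:<P>
+
+<CODE><PRE>
+#include "libmpeg3.h"
+#include &lt;stdlib.h&gt;
+
+/* Illustrative sketch: read one YUV frame from video stream 0. */
+void read_one_yuv_frame(mpeg3_t *file)
+{
+	int w = mpeg3_video_width(file, 0);
+	int h = mpeg3_video_height(file, 0);
+	char *y_output = malloc(w * h);          /* in_w * in_h */
+	char *u_output = malloc(w * h / 4);      /* in_w * in_h / 4 */
+	char *v_output = malloc(w * h / 4);
+
+	mpeg3_read_yuvframe(file,
+		y_output, u_output, v_output,
+		0, 0, w, h,        /* take the whole input frame */
+		0);                /* video stream 0 */
+
+	/* ... use the planes, then release them ... */
+	free(y_output);
+	free(u_output);
+	free(v_output);
+}
+</PRE></CODE>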
+
+
+
+
+
+<FONT FACE=HELVETICA SIZE=+4><B>STEP 7: Close the file</B></FONT><P>
+
+Be sure to close the file with <CODE>mpeg3_close(mpeg3_t *file)</CODE>
+when you're done with it.