Edit your content into a single file (max 1080p 30fps) and use ffmpeg to extract the audio and video into two files:
ffmpeg -i Desktop/perkitchen.MP4 -f s16le -acodec pcm_s16le -ar 48000 -ac 2 perkitchen.raw
ffmpeg -i Desktop/perkitchen.MP4 -an -f h264 -pix_fmt yuv420p perkitchen.h264
NOTE: for many media files, the audio and video playback times do not exactly match. If the loops start to drift, tinyloop will perform an audio micro-edit to nudge them back into sync. For most content this is imperceptible. In my experience a typical file will loop several times before requiring a micro-edit. If you carefully calculate the audio file's length to exactly match the number of video frames, it might get nudged only once every 100 or 1000 loops, or never.
Edit your content into a single file (max 1080p 30fps) and use ffmpeg to extract the video:
ffmpeg -i Desktop/perkitchen.MP4 -an -f h264 -pix_fmt yuv420p perkitchen.h264
Put your files into a folder named "playlist". Supports any format omxplayer does (most of them).
Drag-and-drop it. The first file found will loop. Supports any format omxplayer does (most of them).
Load an executable video.sh file in addition to any media files it needs. See the preloaded example. This is also an easy way to customize the built-in scripts, e.g. to send audio to the headphone output or enable subtitles.
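As an illustration, a minimal video.sh might look like the sketch below. The filenames are hypothetical; the flags are standard omxplayer options (-o local routes audio to the 3.5mm headphone jack, -o hdmi to HDMI, and --subtitles loads an .srt file). It is shown as a dry run via echo so you can inspect the command line first; replace echo with exec to actually launch the player.

```shell
#!/bin/sh
# Hypothetical video.sh sketch (filenames assumed, not from the image).
# "-o local" sends audio to the headphone jack; "--subtitles" enables
# subtitles from an .srt file. Replace "echo" with "exec" to play for real.
CMD="omxplayer -o local --subtitles video.srt video.mp4"
echo "$CMD"
```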
At time of creation, I had to choose between several images depending on what type of playback I desired, and I couldn't find a working solution for gapless looping with sound.
Rename video__login_prompt.sh to video.sh, power up, press Ctrl-Alt-F2, and log in with pi/raspberry.
sudo mount -o remount,rw /boot
sudo mount -o remount,rw /
Take the number of video_frames printed by ffmpeg when extracting the video. Keep the video at 30FPS [or 24FPS] and make sure the audio is 1600 [or 2000] audio frames per video frame (at 48000 Hz 16-bit stereo, the file size should be 6400 bytes [or 8000 bytes] * video_frames). If your video is 29.97, either re-encode it to 30, or perform the following math:
audio_frames = video_frames * 48000 / 29.97
Throw away anything past the decimal point
target_file_size = 4 * audio_frames
The result will be within 1 audio frame of target, and will drift so slowly that there should be one micro-edit every thousand or so loops.
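The math above can be sketched as a small shell calculation. The frame count here is illustrative only; substitute the video_frames value ffmpeg printed for your own file. The factor of 4 is the size of one 48000 Hz s16le stereo audio frame (2 bytes per sample, 2 channels).

```shell
# Worked example of the sizing rule for a 29.97 fps source.
video_frames=5394                 # illustrative; use ffmpeg's reported count
# audio_frames = video_frames * 48000 / 29.97, truncated past the decimal
audio_frames=$(awk -v f="$video_frames" \
    'BEGIN { printf "%d", f * 48000 / 29.97 }')
target_size=$((4 * audio_frames)) # 4 bytes per stereo s16le audio frame
echo "$audio_frames audio frames, $target_size bytes"
# truncate -s "$target_size" perkitchen.raw   # then trim the raw file to size
```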
I don't like it either. If you already have yuv420p .h264, it should be possible to demux it, e.g. with Yamb or various other tools. It should also be possible to invoke ffmpeg with -c:v copy to avoid re-encoding, although that didn't work for me. My workflow is to render to Apple ProRes and leave the encoding to ffmpeg.
The default h264 quality works pretty well, but if you get blocky artifacts or other problems in your output file, explore ffmpeg's filters and encoder options. A good place to start is adding "-crf 17".
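For example, the extraction command with -crf 17 added (visually near-lossless for x264; lower values mean higher quality and larger files). The command below uses ffmpeg's built-in testsrc so it is self-contained; for real content, replace the lavfi input with your own file, e.g. "-i Desktop/perkitchen.MP4 -an".

```shell
# Encode raw h264 at higher quality with -crf 17. The testsrc input is
# a synthetic stand-in; swap it for "-i yourfile.MP4 -an" in practice.
ffmpeg -y -f lavfi -i testsrc=duration=1:size=1280x720:rate=30 \
    -f h264 -crf 17 -pix_fmt yuv420p sample.h264
```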
If you don't connect an HDMI cable but do connect a composite cable, you should get video on the composite output. This has been tested on a Raspberry Pi 1 Model A. If you still don't get a signal on your composite (or HDMI) display, advanced video options can be set by editing config.txt in the boot/ partition.
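For reference, a few commonly used config.txt video settings (these are standard Raspberry Pi firmware options, not specific to this image; check the official Raspberry Pi documentation before applying them to your board and display):

```
# Composite standard: 0 = NTSC, 2 = PAL
sdtv_mode=2
# Composite aspect ratio: 1 = 4:3, 3 = 16:9
sdtv_aspect=1
# Force HDMI output even when no display is detected at boot
hdmi_force_hotplug=1
```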
No support is promised, but it is provided on a best-effort basis.
Support accepted; although everything builds on work done by the community, programming the seamless audio-video looping and setting up the image took quite a few hours.