Audio

The <ef-audio> element is a custom HTML element that extends the functionality of the standard HTML <audio> element. It is designed to work seamlessly with the <ef-timegroup> element.

Basic Usage

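A minimal sketch of typical usage, assuming an asset has already been uploaded. The asset ID below is a placeholder, and any attributes <ef-timegroup> may require are omitted:

```html
<!-- "YOUR_ASSET_ID" is a placeholder; upload media to Editframe to obtain a real asset ID -->
<ef-timegroup>
  <ef-audio asset-id="YOUR_ASSET_ID"></ef-audio>
</ef-timegroup>
```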

Attributes

assetId

DOM: read / write
HTML: asset-id
JSX: assetId
Type: string

The asset ID of the audio.

An asset ID is a unique identifier for a media file that has been uploaded to Editframe. This is documented in the Processing Files section.

src

DOM: read / write
HTML: src
JSX: src
Type: string

The source URL/path of the audio.

⚠️ This property is not yet supported in all circumstances. Providing an arbitrary URL or path will not work unless the server is configured to respond in a precise, not-yet-documented format.

Demonstration projects that include such a server are available via npm create @editframe@beta. These are not intended for production use. Once this feature is complete, a specialized server will no longer be required, though there will be specific encoding/muxing limitations.

Instead, upload media to Editframe and use the assetId property.

🚫 This property should only be used for development, testing, and previewing media to end-users in real-time. When submitting a render job, all media should be uploaded to Editframe and the assetId property should be used. Our render servers will render in parallel, and in order to operate efficiently we need to be able to load only the slice of media that is needed to render a small segment of the timeline.

startTimeMs

DOM: read
Type: number

The start time of the audio in milliseconds. This time is relative to the timeline described by the root timegroup the audio is contained within.

endTimeMs

DOM: read
Type: number

The end time of the audio in milliseconds. This time is relative to the timeline described by the root timegroup the audio is contained within.

durationMs

DOM: read
Type: number

The duration of the audio in milliseconds. This is equivalent to the difference between endTimeMs and startTimeMs.
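The relationship can be illustrated with plain numbers (the values below are illustrative, not read from a real element):

```javascript
// Illustrative values, as startTimeMs and endTimeMs would report them
const startTimeMs = 2_000;
const endTimeMs = 7_500;

// durationMs is defined as the difference between the two
const durationMs = endTimeMs - startTimeMs;
console.log(durationMs); // 5500
```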

currentTimeMs

DOM: read
Type: number

The current time of the audio in milliseconds. This time is the time within the timeline of the root timegroup the audio is contained within.

ownCurrentTimeMs

DOM: read
Type: number

The current time of the audio in milliseconds. This time is scoped directly to the audio element itself.

currentSourceTimeMs

DOM: read
Type: number

This property is the current time of the audio in milliseconds, scoped to the source media. If neither trimStart nor sourceIn is set, this property is equivalent to ownCurrentTimeMs.

💡 If you want to associate data with an audio clip, but it's important that the data is not affected by trimming, use this property. For example, if you were tracking an object in the audio, you should use this time so that the tracking holds true even if a trimStart value is set later.
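As a sketch, the relationship between the element-scoped clock and the source-scoped clock might look like the following. The helper and its offset arithmetic are assumptions based on the description above, not the element's actual implementation:

```javascript
// Hypothetical helper: map an element-scoped time (ownCurrentTimeMs)
// to a source-scoped time (currentSourceTimeMs).
// trimStartMs is the trim offset in milliseconds (0 when no trim is set).
function toSourceTimeMs(ownCurrentTimeMs, trimStartMs = 0) {
  return ownCurrentTimeMs + trimStartMs;
}

// With no trim set, the two clocks are equivalent
console.log(toSourceTimeMs(1200)); // 1200

// With a 10s trimStart, the source clock is shifted forward
console.log(toSourceTimeMs(1200, 10_000)); // 11200
```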

trimStart

DOM: read / write
HTML: trimstart
JSX: trimStart
Type: timestring

A string representing a duration of time (e.g. "5s", "1.5s", "500ms")

A duration of time to trim off from the start of the audio.

This time is relative to the start of the audio.

Setting trimStart to 10s will result in the audio starting 10 seconds into the source media.

This property is intended to be used with the trimEnd property.

trimEnd

DOM: read / write
HTML: trimend
JSX: trimEnd
Type: timestring

A string representing a duration of time (e.g. "5s", "1.5s", "500ms")

A duration of time to trim off from the end of the audio.

This time is relative to the end of the audio.

Setting trimEnd to 10s will result in the audio ending 10 seconds before the end of the source media.

This property is intended to be used with the trimStart property.
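A sketch of how trimStart and trimEnd together shrink the playback window. The timestring parser below is a simplified assumption that only handles the "Ns", "N.Ns", and "Nms" forms shown above:

```javascript
// Simplified timestring parser (assumption: only "s" and "ms" suffixes)
function parseTimestring(ts) {
  if (ts.endsWith('ms')) return Number(ts.slice(0, -2));
  if (ts.endsWith('s')) return Number(ts.slice(0, -1)) * 1000;
  throw new Error(`Unrecognized timestring: ${ts}`);
}

// Effective window into the source media after trimming both ends
function trimmedWindowMs(sourceDurationMs, trimStart = '0s', trimEnd = '0s') {
  const start = parseTimestring(trimStart);
  const end = sourceDurationMs - parseTimestring(trimEnd);
  return { start, end, duration: end - start };
}

// A 60s clip with trimStart="10s" and trimEnd="5s" plays from 10s to 55s
console.log(trimmedWindowMs(60_000, '10s', '5s'));
// { start: 10000, end: 55000, duration: 45000 }
```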

sourceIn

DOM: read / write
HTML: sourcein
JSX: sourceIn
Type: timestring

A string representing a duration of time (e.g. "5s", "1.5s", "500ms")

The time in the source media at which the audio starts.

This time is an absolute value in the timeline of the source media.

Setting sourceIn to 10s will result in the audio starting 10 seconds into the source media.

This property is intended to be used with the sourceOut property.

sourceOut

DOM: read / write
HTML: sourceout
JSX: sourceOut
Type: timestring

A string representing a duration of time (e.g. "5s", "1.5s", "500ms")

The time in the source media at which the audio ends.

This time is an absolute value in the timeline of the source media.

Setting sourceOut to 10s will result in the audio ending 10 seconds into the source media.

This property is intended to be used with the sourceIn property.
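Unlike trimStart/trimEnd, which are durations measured from the clip's edges, sourceIn/sourceOut are absolute positions in the source timeline. A self-contained sketch of that difference (the timestring conversion is a simplified assumption covering only "s" and "ms" suffixes):

```javascript
// Simplified timestring-to-milliseconds conversion (assumption: "s" and "ms" only)
const toMs = (ts) =>
  ts.endsWith('ms') ? Number(ts.slice(0, -2)) : Number(ts.slice(0, -1)) * 1000;

// sourceIn/sourceOut select an absolute window in the source timeline
function sourceWindowMs(sourceIn, sourceOut) {
  const start = toMs(sourceIn);
  const end = toMs(sourceOut);
  return { start, end, duration: end - start };
}

// sourceIn="10s", sourceOut="55s" selects the same 45s span that
// trimStart="10s", trimEnd="5s" would select on a 60s clip
console.log(sourceWindowMs('10s', '55s'));
// { start: 10000, end: 55000, duration: 45000 }
```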

parentTimegroup

DOM: read
Type: <ef-timegroup> | null

The closest timegroup element that contains the audio.

rootTimegroup

DOM: read
Type: <ef-timegroup> | null

The outermost timegroup element that contains the audio.

fftSize

DOM: read / write
HTML: fft-size
JSX: fftSize
Type: number
Default: 512

The size of the FFT (Fast Fourier Transform) to use for audio analysis.

This property is used when associating the audio with a waveform or other audio analysis.

The value MUST be a power of 2. Higher values will result in more granular waveforms, though this is not always more aesthetically pleasing.
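Because the value must be a power of 2, it can be worth validating before assigning. The bitwise check below is a standard idiom, not part of the element's API:

```javascript
// A positive integer n is a power of 2 iff it has exactly one set bit
function isPowerOfTwo(n) {
  return Number.isInteger(n) && n > 0 && (n & (n - 1)) === 0;
}

console.log(isPowerOfTwo(512));  // true (the default)
console.log(isPowerOfTwo(1024)); // true
console.log(isPowerOfTwo(1000)); // false
```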

fftDecay

DOM: read / write
HTML: fft-decay
JSX: fftDecay
Type: number
Default: 8

The decay of the FFT (Fast Fourier Transform) to use for audio analysis.

To create a smoother waveform animation, we average together several frames' worth of audio data. This property controls the number of frames to average.

Higher values will result in a smoother animation. Extremely high values will increase computational cost.
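The frame-averaging idea can be sketched as follows. This mirrors the description above; the element's actual smoothing may differ:

```javascript
// Average the last `decay` frames of FFT data, bin by bin
function averageFrames(frames, decay) {
  const recent = frames.slice(-decay);
  const bins = recent[0].length;
  const out = new Array(bins).fill(0);
  for (const frame of recent) {
    for (let i = 0; i < bins; i++) out[i] += frame[i] / recent.length;
  }
  return out;
}

// With decay = 2, only the last two of these three 2-bin frames are averaged
console.log(averageFrames([[0, 0], [2, 4], [4, 8]], 2)); // [ 3, 6 ]
```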

fftGain

DOM: read / write
HTML: fft-gain
JSX: fftGain
Type: number
Default: 3

The gain of the FFT (Fast Fourier Transform) to use for audio analysis.

This property is used when associating the audio with a waveform or other audio analysis.

Higher values will result in a more pronounced waveform.

interpolateFrequencies

HTML: interpolate-frequencies
JSX: interpolateFrequencies
Type: boolean
Default: false

Some input audio may have been passed through a bandpass or lowpass filter, resulting in waveform visualizations that appear to be missing data.

For example, if audio was recorded through a telephone network, which typically cuts out frequencies above ~4 kHz, the waveform will appear to be missing data above this frequency.

Setting this property to true causes zero values at the high end of the frequency spectrum to be ignored, with the remaining data interpolated to fill in the missing values.

This will result in a more aesthetically pleasing waveform, but it may also result in a loss of precision.
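A rough sketch of the interpolation idea, assuming a spectrum whose high-end bins are zero. This illustrates the concept only; it is not the element's implementation:

```javascript
// Stretch the non-zero portion of a spectrum across the full bin range,
// linearly interpolating between samples so the zero tail is filled in
function interpolateHighEnd(bins) {
  // Find the last non-zero bin (the effective top of the spectrum)
  let last = bins.length - 1;
  while (last > 0 && bins[last] === 0) last--;
  const out = new Array(bins.length);
  for (let i = 0; i < bins.length; i++) {
    // Map output position i onto the [0, last] range of real data
    const pos = (i / (bins.length - 1)) * last;
    const lo = Math.floor(pos);
    const hi = Math.min(lo + 1, last);
    const t = pos - lo;
    out[i] = bins[lo] * (1 - t) + bins[hi] * t;
  }
  return out;
}

// The three zero bins at the high end are replaced with interpolated values
console.log(interpolateHighEnd([1, 3, 5, 0, 0, 0]));
```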