As of yesterday, all the music and sound effects for ACF are complete and in-game. Sound is another aspect that I previously knew very little about, and even after slogging through it for this, I'm fully aware that I've still only scratched the surface. But I feel like I still need to say something about it here for completeness' sake. So here are the basics that got me through:
Relevant objects: Unity plays all sounds through the AudioSource component and hears them through an AudioListener. You can have an unlimited number of the first, but exactly one of the second. (This becomes an issue for multiplayer games, which I'll get to below.) AudioSources play AudioClips, which are your imported .mp3, .wav, or .ogg files. Unity supports some other formats too, but I wouldn't bother with them. I wouldn't even bother with .mp3s, to be honest, because Unity has other issues with them... Lastly, there are AudioMixers, which let you route your AudioSources into common groups (like channels, but not exactly, because Unity always has to be weird). In my case, I have one group for BGM and one for SFX. The most common reason to do this is so that you can control the volume of each group separately, but there are a ton of other options for people who actually know what they're doing with sound editing.
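The routing above can be done once in the Inspector, but as a rough sketch of how it looks in code (the "BGM" and "SFX" group names are just my example; use whatever your mixer defines):

```csharp
using UnityEngine;
using UnityEngine.Audio;

public class AudioSetup : MonoBehaviour
{
    // Assumed: an AudioMixer asset with "BGM" and "SFX" groups, assigned in the Inspector.
    public AudioMixer mixer;
    public AudioSource bgmSource;
    public AudioSource sfxSource;

    void Awake()
    {
        // FindMatchingGroups returns every group whose path matches the string;
        // assigning outputAudioMixerGroup routes the source through that group.
        bgmSource.outputAudioMixerGroup = mixer.FindMatchingGroups("BGM")[0];
        sfxSource.outputAudioMixerGroup = mixer.FindMatchingGroups("SFX")[0];
    }
}
```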
The volume slider: Every AudioSource has its own volume control, adjustable on a 0-1 scale. That's groovy, except that even when set to 1, the volume is often much too quiet. I don't know why Unity cranked everything down so much (old message board posts reveal that they did so in the upgrade from version 4 to 5), but you'll probably need to edit your source files outside Unity so that the highest volume really sounds like the highest volume. Now, if you want to adjust an entire AudioMixer group, there's another catch: its volume is measured in decibels, and that causes problems. Decibels are a logarithmic scale, so you'll need a conversion to go from your simple linear Unity slider to the dB value that produces a linear change in volume. The exact conversion depends on the range of your slider, but it's easy enough to look up and adjust.
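For a 0-1 slider, the standard conversion is 20 * log10(x). A minimal sketch, assuming a mixer with an exposed parameter I've named "BGMVolume" (you expose it yourself on the group's volume knob):

```csharp
using UnityEngine;
using UnityEngine.Audio;

public class VolumeSlider : MonoBehaviour
{
    // Assumed: the BGM group's volume is exposed under the name "BGMVolume".
    public AudioMixer mixer;

    // Hook this up to a UI slider with a range of about 0.0001 to 1
    // (never pass 0, since log10(0) is undefined).
    public void SetBgmVolume(float sliderValue)
    {
        // 20 * log10(x) maps linear amplitude onto the mixer's decibel scale:
        // 1 -> 0 dB (full volume), 0.5 -> about -6 dB, 0.0001 -> -80 dB (silent).
        mixer.SetFloat("BGMVolume", Mathf.Log10(sliderValue) * 20f);
    }
}
```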
Spatial blend: Probably the most important setting on the AudioSource, this value determines whether the sound plays as 2D or 3D. If it's set to 3D, the sound plays from the world-space position of the AudioSource, so its volume depends on the AudioListener's distance and the falloff curve you choose. If it's 2D, the sound plays at a constant volume throughout the level. In my game, I use 2D sounds for all of my menu SFX and background music.
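You can set this in the Inspector or in code; a quick fragment (the source names here are just placeholders for illustration):

```csharp
// 0 = fully 2D (constant volume everywhere), 1 = fully 3D (positional).
menuSource.spatialBlend = 0f;
footstepSource.spatialBlend = 1f;

// For 3D sounds, the falloff curve and range matter just as much:
footstepSource.rolloffMode = AudioRolloffMode.Logarithmic;
footstepSource.maxDistance = 30f; // with logarithmic rolloff, attenuation stops past this distance
```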
One source or many: Another common question is whether you should use one AudioSource and keep swapping the AudioClip before playing, or use many with their clips preset. Apparently there isn't much difference in performance, so the answer really comes down to usage. For instance, a single AudioSource can only play one clip at a time, so swapping in a new clip and playing it will cut the first one off (there is a workaround, .PlayOneShot(), but a one-shot can't be paused, stopped, or otherwise controlled once it starts the way a normal AudioSource can). And if you want to make use of 3D sounds, you'll need at least a separate AudioSource for each relevant position in the world (i.e., one attached to each player for all the sounds they can make). In ACF, I just went with one AudioSource per unique sound, with the result that I never had to touch AudioClips in code; I set all the references in the editor once and forgot about them.
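The difference between the two calls, sketched with made-up clip names:

```csharp
// Play() restarts the source's assigned clip, cutting off whatever it was playing:
jumpSource.clip = jumpClip;
jumpSource.Play();

// PlayOneShot() layers a clip on top of whatever is already playing without
// interrupting it, but you get no handle to the one-shot instance afterward,
// so it can't be paused or stopped individually.
jumpSource.PlayOneShot(landClip);
```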
Multiplayer: The obvious problem with only having one AudioListener is how to structure a multiplayer game. (Note: I'm talking about split-screen here; I haven't looked into networking at all, so I don't know how that would work either.) We sort of flubbed this with S.P.A.R.K., and it didn't matter for Trials of the Rift because the entire arena was onscreen at once, so all the audio could be 2D. There are various assets on the Unity store that claim to solve this problem; most of the ones I looked at were huge (and hugely expensive) all-purpose audio managers packed full of features I really didn't need. The one I went with is called SSAM. It works by creating virtual AudioListeners, determining which one is closest to the playing AudioSource, and transforming the entire virtual system into a space relative to the single true AudioListener. The potential downside is that each sound will only be heard at the correct relative volume by its closest virtual AudioListener, but that wasn't a problem for me simply because of how split-screen works: a sound playing right next to one player may sound distant to another, but that distant version would be drowned out by the full-volume version in the first player's listener anyway. Get it? SSAM is also ideal because it doesn't affect 2D sounds at all, and the virtual AudioSources you add are drop-in replacements for the normal ones, including all attributes and methods. I'd recommend this asset; it saved me a lot of hassle over trying to do something similar on my own.
Popping: This is the big one. A quick Google search will tell you that Unity has immense trouble with seamlessly looping audio, and pretty much has since its first release. You'd think this would be an easy thing to get right, given how common looping BGM is in videogames... but this isn't the first comment like that I've made. The point is that there's often a very audible pop or click when the track loops, even if you've edited the waveform to the point where it loops seamlessly in a regular music player. The message boards don't offer a whole lot of help, mainly solutions that only work in specific cases or aren't helpful at all ("make sure you fade in and fade out to remove the pop"... sure, except then it's not a seamless loop either, if the sound drops to zero at some point?), but one thing is clear: you've got no hope if you're using .mp3 files or anything of lower quality than that. So stick with .wav files and use AudioSource functions like PlayScheduled() to create your own loops, rather than the built-in loop checkbox. PlayScheduled() takes a time on the audio system's DSP clock (AudioSettings.dspTime), a double that's far more precise than the frame-based float time. You can read more about all of that in this excellent blog post, but I went with another asset that did it all for me automatically.
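For the curious, here's a rough sketch of the technique that kind of asset automates: two AudioSources take turns, each pass scheduled on the DSP clock so playback lines up sample-accurately. This is my own minimal version, not the asset's code:

```csharp
using UnityEngine;

public class SeamlessLoop : MonoBehaviour
{
    public AudioClip loopClip; // assumed: an uncompressed .wav trimmed to loop cleanly

    private AudioSource[] sources = new AudioSource[2];
    private double nextStart;
    private int next;

    void Start()
    {
        for (int i = 0; i < 2; i++)
        {
            sources[i] = gameObject.AddComponent<AudioSource>();
            sources[i].clip = loopClip;
        }
        nextStart = AudioSettings.dspTime + 0.1;
        ScheduleNext();
    }

    void Update()
    {
        // Queue the next pass one second before the current one ends.
        if (AudioSettings.dspTime > nextStart - 1.0)
            ScheduleNext();
    }

    void ScheduleNext()
    {
        sources[next].PlayScheduled(nextStart);
        // Advance by the clip's exact sample length; using the float .length
        // instead would accumulate drift and reintroduce the pop.
        nextStart += (double)loopClip.samples / loopClip.frequency;
        next = 1 - next;
    }
}
```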
Pausing: My last nugget of wisdom has to do with the pause menu. Most people use Time.timeScale to pause the game, but that has no effect on AudioSources. AudioSources have their own .Stop() and .Pause() methods to turn off the music, but can you imagine the hassle of hunting down every AudioSource in the game and pausing it whenever the player hits pause? Luckily, there's a shortcut: AudioListener.pause. See the difference? Pausing the listener pauses all of the sources at once, in one call. But what about playing music and SFX on the pause menu itself? There's a solution for that too: .ignoreListenerPause, a flag that can be set per AudioSource and does exactly what it says, letting that source keep playing regardless of whether the AudioListener is paused. Functionality like this is really beautiful, because it's a case of the tool doing exactly what you want from it.
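Put together, the whole pause setup is only a few lines (the component and field names are mine, for illustration):

```csharp
using UnityEngine;

public class PauseMenu : MonoBehaviour
{
    // Assumed: a 2D AudioSource dedicated to pause-menu music/SFX.
    public AudioSource menuAudio;

    void Awake()
    {
        // This one source keeps playing even while the listener is paused.
        menuAudio.ignoreListenerPause = true;
    }

    public void SetPaused(bool paused)
    {
        Time.timeScale = paused ? 0f : 1f;
        // One static flag pauses every non-exempt AudioSource in the scene.
        AudioListener.pause = paused;
    }
}
```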