Daguerreotype in Blender Cycles nodes

In late 2019 I came across a daguerreotype on display while visiting a local historical society. I found it amusing how the image appeared in positive when reflecting my dark shirt, but in negative when reflecting the lighter wall behind me.

I got the idea to mimic this using Blender's material nodes, and in half an hour I had it working. Yes, HALF AN HOUR.

Usually nothing I do in Blender ever works that simply, but this idea pretty much just worked, and when it was done I was almost disappointed that it was sooo easy.

So here’s the recipe:

(other panels on RGB Curves node are untouched, plain diagonal lines)

Use Cycles, of course. I just UV-map an image onto a flat-ish box about the thickness of a plate of glass. The most difficult part is setting up a background with enough contrast to notice the effect, but just orbiting around the viewport makes it plain that the effect works.
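If it helps to see it as a script, here is a minimal sketch of the kind of node tree I mean (a reconstruction for illustration, not a dump of my actual file; the exact curve shapes and values are up to you). The photo drives a mix between a mirror-like glossy shader and a plain white diffuse shader, so the dark parts of the plate act as a mirror while the bright parts stay bright:

import bpy

mat = bpy.data.materials.new("FakeDaguerreotype")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

tex = nodes.new("ShaderNodeTexImage")         # the photograph, UV-mapped to the plate
curves = nodes.new("ShaderNodeRGBCurve")      # squash the tonal range to taste
glossy = nodes.new("ShaderNodeBsdfGlossy")    # the polished mirror
diffuse = nodes.new("ShaderNodeBsdfDiffuse")  # the bright 'image' surface
mix = nodes.new("ShaderNodeMixShader")
out = nodes.new("ShaderNodeOutputMaterial")

glossy.inputs["Roughness"].default_value = 0.02
diffuse.inputs["Color"].default_value = (1, 1, 1, 1)

links.new(tex.outputs["Color"], curves.inputs["Color"])
links.new(curves.outputs["Color"], mix.inputs["Fac"])   # brighter image = more diffuse
links.new(glossy.outputs["BSDF"], mix.inputs[1])
links.new(diffuse.outputs["BSDF"], mix.inputs[2])
links.new(mix.outputs["Shader"], out.inputs["Surface"])
# tex.image = bpy.data.images.load("/path/to/photo.jpg")  # assign the photo here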

And that’s how I make fake daguerreotypes in Blender.


for the love of history, please STOP saying “Nth Century”

What follows is a polemic argument, but I feel it needs to be said and I strangely don’t hear anyone on the internet saying it.

The intended audience is anyone who writes or publishes history teaching materials, and anyone who teaches history.

When writing and speaking, it is often desirable to vary the words used so as to avoid sounding repetitive. There are at present two acceptable methods for denoting which century is being discussed when speaking of history:

You can say “the 17th century…”

or you can say “the 1600’s” (pronounced “the sixteen hundreds”)

Due to the desire to vary the language and maintain fluidity, these two are often used interchangeably in teaching. This produces the desired fluidity of words, but only at the cost of fluidity of thought.

When we think about it, we know that “1600’s” and “17th century” mean the same thing. Unfortunately our brains are lazy. They don’t always catch the subtle difference and apply the mental resources needed to convert the “Nth-century” notation into the correct span of 100 years.

This is a problem because it kind-of-matters which century events happened in, if you actually care about understanding how events in history fit together.

Personally, I cannot tell you how many times I have made it half-way through a lesson and realized I was thinking of the WRONG CENTURY. This is a problem because, for me as for many (if not most) students of history, the timeline provides the mental pegs upon which to hang incoming knowledge. When I find I have been thinking within the wrong century, I have to backtrack and undo all of the connections that my brain was starting to make with the new information. It hurts. Having to do this instantly takes the fun out of what I was learning.

The current two-way system causes unnecessary mental friction for the learner. For most learners this will not be the straw that breaks the camel's back, but it is an unnecessary weight nonetheless, and it provides no real benefit. Of the two methods, the "Nth-century" designation is clearly the inferior. Just try specifying a time span of less than a century with it. You can do it, sure, but it's tedious and sounds archaic.

When I say “the 1800’s”, you don’t have to think. You instantly know which century I’m talking about. If instead I were to say “the seventeenth century” it would probably take you until about here to realize that I am not even talking about the same century. And if you caught it before I said it, you are a more attentive reader than I am. I still wouldn’t have caught it.

What I propose:

Students should still be taught about Nth-Century designation. They need to know what it is and how it works. It hasn’t yet fallen into archaism, but even once reason prevails and it does, we still want students to be able to read and understand old documents. We need to keep teaching about it.

nevertheless…

We should absolutely and completely stop teaching with it.

You wouldn’t write a textbook littered with dates in Roman Numeral format simply to provide variation, because it would make it harder for the students to read. And yet that would arguably do less harm than the current system because it would be difficult to gloss over a Roman Numeral and find yourself learning about the wrong century.

I would like to see all publishers modify their style guides to disfavor (or outright ban) the use of Nth-century designation in the future. I think that is the best way, as I would hate for the burden of this to fall on teachers. I would also like teachers to be aware of it, so they can be mindful when producing their own lessons.

How to implement:

Best of all, this is an easy change to make. All you need to do is search the length of any text for the word "century" and you will instantly find every location that requires editing, with a very manageable number of false positives. And since a new edition of every textbook is produced every year anyway… well, here's a change that is super easy to implement, that makes the book more accessible and easier to understand, and that actually makes purchasing the newest edition worth the student's money.
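A throwaway script along those lines (purely illustrative) is all it takes:

import sys

# print every line of a manuscript that mentions a century, for an editor to review
for n, line in enumerate(open(sys.argv[1], encoding="utf-8"), start=1):
    if "centur" in line.lower():   # catches "century" and "centuries"
        print(f"{n}: {line.strip()}")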

If you agree, you have my full permission to take this post and run with it. I don't need credit; I just want this idea in front of as many eyeballs as possible. I want to remove friction from the learning experience in any way possible, and I feel this is certainly the low-hanging fruit.


a tree with arms (video, BTS, sonic pi code)

The Video:

Recently I’ve been diving into the tutorials included in Sonic Pi, and using what I learn to make music (or noise) to accompany the videos I have been making. Since early April (2020) I have been making a lot of videos, most of them with the goal of having them done before midnight, though some have taken a few days. Often I will combine whatever I make in a day, even if it is a little unrelated, into the day’s video(s).

So yesterday (June 5, 2020) I had the day off, and I made a strong candidate for "weirdest video I've made":

It was created for VR180 3D, if you have a headset to view it that way.

The Behind-the-scenes:

So here’s the behind-the-scenes:

9am – film initial (unused) video with 3 trees. VR 180 filmed with Vuze XR.

10am – while loading the video into the computer, create the music. I had the idea that I wanted something creepy, and microtonal was the way to go. A traditional piano octave has 12 notes. I considered using 7, 15, or 18 but settled on 20. I know 12 is used in most Western music because it can be factored into 2, 3, 4, and 6, and this makes for interesting varieties of distinct chords that our ears can distinguish. Because music theory… er, something.

Anyway, since 20 can be divided into 2, 4, 5, and 10 (the same number of factors as 12), I figured it could be neat to hear. Down at the bottom are two code blocks written in Sonic Pi that are the result of this. They are similar; the second just uses a different panning effect.

The patterns of chords played are 20 tone variations based on:

C G Am F —> the most overused chord progression because it’s awesome

Dm F Gm Bb —> a pattern I used recently and wanted to hear how it would sound.

Now let's see if I can chart how I worked this out (as my paper diagram was oversimplified and kind of a mess).

The chart below shows the regular chords like you would play on the 12 notes of a piano octave. All of the numbers on this chart are the corresponding MIDI numbers (which Sonic Pi also uses). The letters in the left column are the notes; the letters across the top are the chords.

             C      G      Am    F
______________________________________
71 B             |  71  |      |
70 Bb            |      |      |
69 A_____________|______|__69__|__69__
68 Ab            |      |      |
67 G         67  |  67  |      |
66 Gb____________|______|______|______
65 F             |      |      |  65
64 E         64  |      |  64  |
63 Eb____________|______|______|______
62 D             |  62  |      |  
61 Db            |      |      |      
60 C_________60__|______|__60__|__60__

Now let's match that up to a 20-note setup.

             C      G      Am    F
______________________________________
71.4             |  71.4|      |
70.8             |      |      |
70.2             |      |      |
69.6             |      |      |
69   A___________|______|__69__|__69__
68.4             |      |      |
67.8             |      |      |
67.2         67.2|  67.2|      |
66.599           |      |      |             <--- I'm not superstitious
66   Gb__________|______|______|______
65.4             |      |      |   
64.8             |      |      |  64.8       <---this could have been one higher also
64.2         64.2|      |  64.2|
63.6             |      |      |      
63   Eb__________|______|______|______
62.4             |  62.4|      |  
61.8             |      |      |      
61.2             |      |      |      
60.6             |      |      |      
60   C_______60__|______|__60__|__60__

If you compare the charts, it's plain that all I did was, for each segment of 3 notes in the twelve, match it to a segment of 5 notes in the twenty and substitute low for low, mid for mid, high for high. A couple of the notes (C and A) didn't change at all, and no single note changed by more than 0.4 up or down. It turns out that while it has some microtonal notes, it sounds totally normal to my ears. So much for that creepy tone I was looking for. Gotta resort to minor keys for that, I guess.
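Here is that segment rule as a quick Python sketch, for anyone who would rather compute the second chart than read it off (this is a paraphrase for checking the numbers, not the code I used; the one exception is F, which I nudged down a step to 64.8 in the first chart):

# Map a 12-tone MIDI note onto the 20-tone octave (0.6-semitone steps):
# each 3-note segment of the twelve maps to a 5-note segment of the twenty,
# low for low, mid for mid, high for high.
def to_20_tone(midi12, root=60):
    octave, step12 = divmod(midi12 - root, 12)
    segment, pos = divmod(step12, 3)
    step20 = segment * 5 + {0: 0, 1: 2, 2: 4}[pos]
    return root + 12 * octave + 0.6 * step20

print([to_20_tone(n) for n in (60, 62, 64, 65, 67, 69, 71)])
# -> 60, 62.4, 64.2, 65.4, 67.2, 69, 71.4 (give or take float rounding)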

So here are the other chords (which I didn't chart before, but will now):

             Dm     F      Gm     Bb
______________________________________
71 B             |      |      |
70 Bb            |      |  70  |  70
69 A_________69__|__69__|______|______
68 Ab            |      |      |
67 G             |      |  67  |
66 Gb____________|______|______|______
65 F         65  |  65  |      |  65 
64 E             |      |      |
63 Eb____________|______|______|______
62 D         62  |      |  62  |  62
61 Db            |      |      |      
60 C_____________|__60__|______|______

converts to:

             Dm     F      Gm     Bb
______________________________________
71.4             |      |      |
70.8             |      |      |
70.2             |      |  70.2|  70.2 
69.6         69.6|      |      |
69   A___________|__69__|______|______
68.4             |      |      |
67.8             |      |      |
67.2             |      |  67.2|
66.599           |      |      |      
66   Gb__________|______|______|______
65.4         65.4|      |      |  65.4
64.8             |  64.8|      |             <---- this code was not charted by me
64.2             |      |      |                   like this, and chords were 
63.6             |      |      |                   decided by offsets from the bass
63   Eb__________|______|______|______             note. I'm only seeing this chart
62.4         62.4|      |  62.4|  62.4             now.
61.8             |      |      |      
61.2             |      |      |      
60.6             |      |      |      
60   C___________|__60__|______|______

The actual code is down below, but for now, back to the timeline. Anyway, I wrote the code in the morning. Then I reviewed the footage and realized my head was too far around the tree for it to work. I also decided the video would be better as a close-up on one tree instead of three.

12:45pm reshoot the video.

Blank footage at the beginning, intended to be used as the background plate.

Hands reaching around tree – this time keeping my head out of the way and not doing the finger walking thing but just reaching as far as I could toward the camera immediately. Used a chair for the top few hands. Recorded 10 hands in all.

Leave camera rolling to catch blank shots at the end in case they are needed to match the changing outdoor lighting.

Let the dogs out. Encourage them to wander around while I remain out of view behind the camera.

Watch as they both sniff around near the tree and walk away, and then a moth flies past. Realize this is perfect and walk away from the camera to keep the dogs with me so the next frames will be empty and easy to edit.

Afternoon: load video into computer. Render out full horizontal slices of it 480 pixels high corresponding to each hand. Original video is 5760×2880 so these slices are 5760×480 each, rendered into ten separate folders indicating which hand (a-j) and how many pixels the slice needs to be offset from the bottom of the frame (800,1200,1400, etc). This is a “do this – walk away and let it render – do that – walk away and let it render” kind of task. Over and over and over again.

7pm: grocery shopping, only the third time I have personally been to the store since March.

Some time before and/or after that: masking the hands. No mask had more than four vertices. None were animated. One hand had to be dropped. All in all I was expecting this to be a painful and tedious process, but it required barely any tweaking to get passable (albeit not professional) "good enough that mistakes aren't the first thing you notice" results. I have learned to love editing quick-and-messy, and this was that. Really, it went better than I expected. I half-expected the project to die in this stage or get stuck here for days, but it didn't. For that I'm grateful.

10pm-ish: The hard step was easy, now the easy step will be impossible.

Make the masks match the background. Fade in the strips over time before the hands appear, so there are no sudden changes in lighting. Feathered edges should make the mask edges invisible. Reverse the frames at the end to make the hands disappear (a last-minute decision: originally it was just going to end abruptly, but adding the audio made me want to keep the ending long). I set the music just a bit louder than the background sound, so it blends in but is still prominent. So far so good.

I have 9 layers of hands (image sequences) that need to be stacked onto a base video in Blender’s video-sequence-editor. I use these tools all the time. Alpha-over is second nature. The sequencer is my go-to. Prior to learning ffmpeg, it was the only way I knew to assemble videos (at least since the old days of Windows Movie Maker and scouring the internet for freeware because I was young and naive, but somehow never got a bad virus). This should be easy.

But it's too much. Apparently splitting 9 layers of offset-and-masked image sequences into separate two-frame-long strips isn't going to work. (The two-frames-long thing was necessary to stretch the brief duration, and to give it a nice classic monster-movie feel.)

Solution: Render the hands separately, then alpha-over them onto the video as one big image sequence, 2-frames each. This is easy.

This isn’t working. It looks wrong. Inside the masks the sky is darker than outside. Inside the masks the ground is lighter than outside.

Solutions?

Premultiply alpha? nope. Worse.

Contrast/saturation/multiply…nope.

Color ramp? Oh so close but no. And then …

Well, I can’t get the masks to disappear into the background, but those arms look like some classic video-game stop motion thing with that particular setting. I just have to render a version like that to see. Save that setting.

Promise myself I might make a version where the masks are invisible and the hands look normal, but for now I can render this out and get something finished today.

Approaching midnight: Finished rendering.

I had rendered it out to .mp4, h.264, Perceptually Lossless. 2.32GB is too big but that was deliberate, as I wanted it to not lose much quality in the recompression (once more by me to h.265, and then again by youtube.)

Now to ffmpeg:

ffmpeg -i fixed_well_sorta11988-15346.mp4 -c:v libx265 -crf 23 -c:a copy output.mp4

And that was it. By about 12:15am today (6/6/2020) I had an 874 mb file ready to upload overnight. Which was good, because I needed to go to bed.

In hindsight: I meant to do a jumpscare with some jarring noise. Didn't do it. I meant to hide the editing and make it look realistic. I didn't. Still, the results are interesting and likeably stylized. On first viewing in the headset this morning, the flaws were not distracting at all. Also… that tune is catchy. It has been stuck in my head all day.

The Sonic Pi Code:

Sonic Pi Code used to make the audio is below. There were two separate pieces of code recorded and each is copied below. If you have a copy of Sonic Pi 3.2.2 or later they should play just fine as they use no custom samples and are fully portable. Have fun editing them! The only real difference between them is how the panning is controlled in the second one.

# written in 3.2.2 for trees with arms
var = 0
bn = ring(0,12,15,8)
an = ring(4,8,12,17)
lo = ring(60,60.6,61.2,61.8,62.4,63,63.6,64.2,64.8,65.4,66,66.599,67.2,67.8,68.4,69,69.6,70.2,70.8,71.4)
use_synth :piano
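# note: ":vowel && :distortion" evaluates to just :distortion in Ruby, so only the distortion fx is applied here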
with_fx :vowel && :distortion do |r|
  control r, mix: 0.2
  4.times do
    with_fx :ping_pong, mix: var do
      play lo[bn.tick]-12, sustain: 1
      sleep 0.25
      play lo[bn.look]
      sleep 0.25
      play lo[bn.look+7]
      sleep 0.25
      play lo[bn.look+12]
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1
      sleep 0.25
      play lo[bn.look]
      sleep 0.25
      play lo[bn.look+7]
      sleep 0.25
      play lo[bn.look+12]
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1
      sleep 0.25
      play lo[bn.look]
      sleep 0.25
      play lo[bn.look+12]
      sleep 0.25
      play lo[bn.look+5]
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1
      sleep 0.25
      play lo[bn.look]
      sleep 0.25
      play lo[bn.look+12]
      sleep 0.25
      play lo[bn.look+7]
      sleep 0.25
    end
    var = var + 0.3
    
  end
  control r, mix: 0.7
  var = 0.25
  4.times do
    with_fx :ping_pong, mix: 1 do
      play lo[an.tick]-12, sustain: 1
      sleep 0.25
      play lo[an.look]
      sleep 0.25
      play lo[an.look+5]
      sleep 0.25
      play lo[an.look+12]
      sleep 0.25
      
      play lo[an.tick]-12, sustain: 1
      sleep 0.25
      play lo[an.look]
      sleep 0.25
      play lo[an.look+7]
      sleep 0.25
      play lo[an.look+12]
      sleep 0.25
      
      play lo[an.tick]-12, sustain: 1
      sleep 0.25
      play lo[an.look]
      sleep 0.25
      play lo[an.look+12]
      sleep 0.25
      play lo[an.look+5]
      sleep 0.25
      
      play lo[an.tick]-12, sustain: 1
      sleep 0.25
      play lo[an.look]
      sleep 0.25
      play lo[an.look+12]
      sleep 0.25
      play lo[an.look+7]
      sleep 0.25
    end
  end
  var = 1
  4.times do
    control r, mix: var
    with_fx :ping_pong, mix: var do
      play lo[bn.tick]-12, sustain: 1
      sleep 0.25
      play lo[bn.look]
      sleep 0.25
      play lo[bn.look+7]
      sleep 0.25
      play lo[bn.look+12]
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1
      sleep 0.25
      play lo[bn.look]
      sleep 0.25
      play lo[bn.look+7]
      sleep 0.25
      play lo[bn.look+12]
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1
      sleep 0.25
      play lo[bn.look]
      sleep 0.25
      play lo[bn.look+12]
      sleep 0.25
      play lo[bn.look+5]
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1
      sleep 0.25
      play lo[bn.look]
      sleep 0.25
      play lo[bn.look+12]
      sleep 0.25
      play lo[bn.look+7]
      sleep 0.25
    end
    var = var - 0.25
  end
end
# written in 3.2.2 for trees with arms
var = 0
panr = ring(1,0.6,-0.2,-0.8,-1,-0.6,0.2,0.8)
bn = ring(0,12,15,8)
an = ring(4,8,12,17)
lo = ring(60,60.6,61.2,61.8,62.4,63,63.6,64.2,64.8,65.4,66,66.599,67.2,67.8,68.4,69,69.6,70.2,70.8,71.4)
use_synth :piano
with_fx :vowel && :distortion do |r|
  control r, mix: 0.2
  4.times do
    with_fx :ping_pong, mix: var do
      play lo[bn.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[bn.look], pan: -(panr.tick)
      sleep 0.25
      play lo[bn.look+7], pan: panr.tick
      sleep 0.25
      play lo[bn.look+12], pan: -(panr.tick)
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[bn.look], pan: -(panr.tick)
      sleep 0.25
      play lo[bn.look+7], pan: panr.tick
      sleep 0.25
      play lo[bn.look+12], pan: -(panr.tick)
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[bn.look], pan: -(panr.tick)
      sleep 0.25
      play lo[bn.look+12], pan: panr.tick
      sleep 0.25
      play lo[bn.look+5], pan: -(panr.tick)
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[bn.look], pan: -(panr.tick)
      sleep 0.25
      play lo[bn.look+12], pan: panr.tick
      sleep 0.25
      play lo[bn.look+7], pan: -(panr.tick)
      sleep 0.25
    end
    var = var + 0.3
    
  end
  control r, mix: 0.7
  var = 0.25
  4.times do
    with_fx :ping_pong, mix: 1 do
      play lo[an.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[an.look], pan: -(panr.tick)
      sleep 0.25
      play lo[an.look+5], pan: panr.tick
      sleep 0.25
      play lo[an.look+12], pan: -(panr.tick)
      sleep 0.25
      
      play lo[an.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[an.look], pan: -(panr.tick)
      sleep 0.25
      play lo[an.look+7], pan: panr.tick
      sleep 0.25
      play lo[an.look+12], pan: -(panr.tick)
      sleep 0.25
      
      play lo[an.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[an.look], pan: -(panr.tick)
      sleep 0.25
      play lo[an.look+12], pan: panr.tick
      sleep 0.25
      play lo[an.look+5], pan: -(panr.tick)
      sleep 0.25
      
      play lo[an.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[an.look], pan: -(panr.tick)
      sleep 0.25
      play lo[an.look+12], pan: panr.tick
      sleep 0.25
      play lo[an.look+7], pan: -(panr.tick)
      sleep 0.25
    end
  end
  var = 1
  4.times do
    control r, mix: var
    with_fx :ping_pong, mix: var do
      play lo[bn.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[bn.look], pan: -(panr.tick)
      sleep 0.25
      play lo[bn.look+7], pan: panr.tick
      sleep 0.25
      play lo[bn.look+12], pan: -(panr.tick)
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[bn.look], pan: -(panr.tick)
      sleep 0.25
      play lo[bn.look+7], pan: panr.tick
      sleep 0.25
      play lo[bn.look+12], pan: -(panr.tick)
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[bn.look], pan: -(panr.tick)
      sleep 0.25
      play lo[bn.look+12], pan: panr.tick
      sleep 0.25
      play lo[bn.look+5], pan: -(panr.tick)
      sleep 0.25
      
      play lo[bn.tick]-12, sustain: 1, pan: panr.tick
      sleep 0.25
      play lo[bn.look], pan: -(panr.tick)
      sleep 0.25
      play lo[bn.look+12], pan: panr.tick
      sleep 0.25
      play lo[bn.look+7], pan: -(panr.tick)
      sleep 0.25
    end
    var = var - 0.25
  end
end

FFMPEG memo

The thing I can never remember how to do: change the framerate so it changes the video speed without dropping or interpolating frames. The key is to set the framerate of the input using -r 24 (or whatever fps you want to set it to) before your input -i input.mp4, like so:

ffmpeg -r 24 -i input.mp4 -an output.mp4

This does re-encode the video, which stinks.

I used -an to remove the audio for simplicity. There are ways to adjust it too, but I won't get into that here. It can be added back in later without another video encode anyway (provided the edited audio is compatible with the container).
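For example, dropping a separately adjusted audio file back onto the sped-up video is just a stream copy (filenames here are placeholders):

ffmpeg -i output.mp4 -i adjusted_audio.m4a -map 0:v:0 -map 1:a:0 -c copy output_with_audio.mp4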


Ambisonics Done Manually

Disclaimer: I am not an audio professional. I am improvising and really just having fun. This waves.com article (also linked down at the bottom) is a good place to start if you are looking for serious info on ambisonics.

What are ambisonics? They let you turn your head in a VR video and, if you have headphones, the audio will sound like it's coming from the direction it should.

 

So a while back I saw a youtube spec on how to upload spatial audio. I was thinking it was like surround sound – put a virtual speaker here, put a virtual speaker there – but no.

NO, it’s not like that.

The way the audio channels were supposed to be mapped looked like some weird electron probability field. (I’m not a physicist either, so it looked complicated.)

Well, I did a bit more reading. All the tools I found for creating stuff with ambisonics cost money, and are (sensibly) geared toward audio professionals. I just wanted to play with it, so that wasn't for me.

Note that the end goal is to position audio as coming from any direction – any point on a 3d sphere surrounding your head. Distance isn’t really factored in here, just direction. If I wanted something to sound quieter with distance, I should make it quieter before starting the process.

 

Using MuseScore:

First I split an old MIDI file I had made (back on January 4, 2006) into separate tracks for each instrument, and for extra credit split them between high and low notes. I exported them to .wav.

I also exported one .wav with all of the audio, for later use as the "W" channel.

Using Blender:

I set up objects in 3D space around a VR camera and timed their Scale/Rotation/Material to the audio of the respective tracks by baking the audio to F-curves in the graph editor.

(for efficiency I rendered each object in a small frame then put them back together in the sequencer since most of the scene is blank space)

I rendered the video out to a silent (no audio track) video.mov and set it aside on a dusty top shelf above-eye-level next to a mousetrap for later.

About the Audio –

First-order Ambisonics (the simplest, and the ones compatible with youtube) require four audio channels.

The first channel is the easiest. It is labelled “W” and is just a “positive” mix of all the audio.

The other three channels represent the three axes around the camera.

That "positive" is important, as we are dealing with "positive" and "negative" audio here. Like positive and negative numbers, the two versions have equal absolute values, and they would cancel each other out if combined.

I had long thought that the "Invert" function in Audacity was useless because it didn't perceptibly change the sound I heard. Now suddenly it is vital. Inverting a sound makes it "negative": it flips the wave, so peaks become troughs and troughs become peaks.

So back to the four channels.

W – All sound at full volume – in the positive no matter what direction it comes from.

Y – the “LEFT” channel – Sounds to the left of the camera are positive, sounds to the right of the camera are negative. Sounds closer to the camera are quieter.

Z – the “UP” channel – likewise but up is positive, down is negative.

X – the “FRONT” channel – likewise but front is positive, back is negative.
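For reference, the textbook version of those four channels is just a set of per-channel gains on the mono source. Below is a small Python sketch of the standard first-order (AmbiX-style) encoding; my Audacity process below approximates these gains by ear, in dB:

import numpy as np

def encode_first_order(mono, azimuth_deg, elevation_deg):
    # mono: 1-D array of samples; azimuth: 0 = front, +90 = left; elevation: +90 = up
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono * 1.0                      # W: omnidirectional, always positive
    y = mono * np.sin(az) * np.cos(el)  # Y: left positive, right negative
    z = mono * np.sin(el)               # Z: up positive, down negative
    x = mono * np.cos(az) * np.cos(el)  # X: front positive, back negative
    return np.stack([w, y, z, x])       # channel order W, Y, Z, X

# e.g. a source 45 degrees to the left at ear level: encode_first_order(samples, 45, 0)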

Using Audacity:

Import the original "W" track and "Amplify" it all the way down to silence. I'm not sure if this was necessary, but I kept it in so none of the tracks would stop short.

To make the Y (LEFT) axis track:

One at a time, import the tracks corresponding to the objects in Blender. Use "Amplify" to reduce the audio more, the closer the object was to the camera along the axis in question. ("How much" was just determined by guessing and eyeballing it. Still, I measured and kept my measurements consistent, so as to make sure objects landed approximately where they should on the imaginary audio-sphere when all the channels combine to make it happen.) It's counterintuitive, but for these three axes, CLOSER MEANS QUIETER.

An object 6 feet left-of-camera would be amped down by -2, whereas an object 3 feet left-of-camera would be amped down by -5. An object 2 feet right-of-camera would be amped down by -6 and inverted. An object directly in front of the camera (neither right nor left) would not need its audio included in this track.

Export, mapping all audio to one mono track, to a file I called y.wav.

Delete all but the silent first track used for length control, and repeat this process for the Z and X axes.

USING FFMPEG:

Set audio rate of each file to 48000:

ffmpeg -i input.wav -ar 48000 output.wav

Reach up to dusty top shelf above-eye-level. OUCH!!! Regret setting that mousetrap! Grab video file video.mov and get ready to combine it with the audio.

ffmpeg -i video.mov -i w.wav -i y.wav -i z.wav -i x.wav \
-filter_complex "[1][2][3][4]amerge=inputs=4[aout]" \
-map "0:v:0" -map "[aout]:a:1" \
-c:v copy -c:a pcm_s16le output.mov

Then run it through youtube’s vr video metadata tool, checking the appropriate boxes (all three in my case).

And that was it!

Note that it takes a bit of time for youtube to process the audio; when I first uploaded it, it was just stereo, unaffected by head motion. That quickly resolved once the video had had time to process out on the youtube servers in cloud land.

 

Links that were very helpful:

On youtube spatial audio:

https://support.google.com/youtube/answer/6395969?hl=en

 

On youtube VR:

https://support.google.com/youtube/answer/6316263

 

Excellent article about ambisonics:

https://www.waves.com/ambisonics-explained-guide-for-sound-engineers

 

Helpful in understanding:

https://www.justmastering.com/article-phase-and-polarity.php

 

 

 

 


Working (a little) color into Autostereograms

(image: output_donutMix_0014_.jpg)

tl;dr: keep the random noise desaturated and limit its value range, and you can get away with putting a very limited amount of the original colors back in, provided you run them through the same algorithm and let them repeat/fade off to the side a bit. There will be some ghosting, so it's a balancing act. The image above is 15% full color and 85% random noise.

 

Now to the long story:

Around Christmas 2016, the algorithm that makes out-of-the-blue youtube suggestions got something very right for a change. It offered me this long but fascinating video:

 

I watched it and realized that whatever this language “Processing” was, it could push individual pixels around, and this was EXACTLY what I needed to make autostereograms.

PREVIOUSLY, ON ROBBIE’S LIFE: A few years ago, I spent two weeks desperately failing to make autostereograms using Blender’s node system. I got oh-sooo-close but the final result would always have vertical bars that were stuck at the background depth. I can work with limitations, but this was just ridiculous. What was I supposed to do, just have objects bounce up and down while staying quarantined in their own vertical spaces?

 

Around the same time I found a free program online that could make autostereograms from a depth map and a vertical column of wallpaper, but it was clunky for me to use, and it had the major drawback of starting from the left side of the image (as opposed to the center).

Back to the present time:

I spent the 12 days of Christmas (through January 6) engrossed in code, drinking too much Keurig coffee, and staying up as late as one can without impairing one's ability to function at work the next day.

There were some funny hurdles that required thinking around. To enumerate them as I remember them:

I had to figure out how to start the code from the center of the image. Sounds simple, but telling it to "work to the left from center, then work to the right from center" would leave a center area where the offsets are not related to each other and just don't work. The solution was to halve the offset in both directions for the center bit and then proceed outward from there. ("Do the center portion this way, then do the left and right portions that way.")
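Here's a bare-bones Python sketch of that center-out idea with plain integer offsets (a simplification for illustration only; my actual program is in Processing and does the floating-point blending described next):

import numpy as np

def center_out_stereogram(depth, wall_w=100, rng=None):
    # depth: 2-D array in 0..1, 1 = nearest; separation runs from the full
    # wallpaper width (background) down to 5/8 of it (nearest objects)
    rng = np.random.default_rng() if rng is None else rng
    h, w = depth.shape
    sep = np.rint(wall_w * (1 - depth * 3 / 8)).astype(int)
    out = rng.integers(0, 256, size=(h, w, 3), dtype=np.uint8)  # random-dot seed
    cx = w // 2
    for y in range(h):
        s0 = sep[y, cx]
        left, right = cx - s0 // 2, cx + (s0 - s0 // 2)  # seed strip: half an offset each way
        for x in range(right, w):          # right portion: copy from `sep` pixels to the left
            out[y, x] = out[y, x - sep[y, x]]
        for x in range(left - 1, -1, -1):  # left portion: copy from `sep` pixels to the right
            out[y, x] = out[y, x + sep[y, x]]
    return out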

Once I got it working and almost breathed a sigh of relief, there was another problem: the stair-stepping problem was severe. I was working with integers for pixel offsets, so depths were appearing at integers instead of being a smooth gradient. I needed to work with floating-point numbers and have them mix the offset color with two pixels according to where the floating-point number ended up between them. This meant I had to keep an array of the strength of the particular colors I was putting in each pixel, so when throwing a new color onto a pixel that already had some, I could know in what proportion to mix it. (The implementation of this was so tricky that I have yet to apply it to the full-color portion.) This floating-point solution did not fully resolve the stair-stepping issue but it makes the stair-stepping less obtrusive, and I don’t have any better ideas at the moment. (Watching as it renders, it just seems to fill the gaps with spiderman-webs. Fine details of the depth map are still lost. Increasing the resolution helps some, but computing time increases accordingly.)

 

FINALLY — THE COLOR!

I had the idea that if the full color image could be repeated in a fading fashion, it might be a nice way to get a bit of the original color into the final image.

something like this:

 Full color image -->                           Object
 Generate this overlay -->     Object25% Object50% Object50% Object25%
 You see this -->     FainterGhost  FaintGhost  Object  FaintGhost  FainterGhost

In practice this involves two separate passes that each create double-vision, so it’s sort-of a four-eyed double vision. (I wear glasses. I can make those jokes.)

 

This is then mixed with the random noise, but it works best when the randomness is limited to lower saturation and higher brightness.

LAYMAN'S TERMS:

HUE == what color of the rainbow is it? Each number is a color along the rainbow.
SATURATION == how much is it that color as opposed to gray? Higher number = more color.
VALUE == how bright is it? Higher number = brighter.

The random noise is generated according to the following recipe:

 

RANDOM NOISE RECIPE

Hue = random number between 0 and 255 (full range)
Saturation = random number between 0 and 127 (half range) – lets colors of original image stand out more.
Value = random number between 50 and 255 – no very-dark pixels.
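In code form, the recipe is just this (a rough Python equivalent; the real thing lives in my Processing sketch):

import colorsys, random

def noise_pixel():
    # hue: full range; saturation: half range; value: 50-255, so no very-dark pixels
    h = random.randint(0, 255) / 255
    s = random.randint(0, 127) / 255
    v = random.randint(50, 255) / 255
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return int(r * 255), int(g * 255), int(b * 255)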

I mixed them in various proportions. The image above is 15% full color (four-eyed) and 85% random-dot recipe. I don't know yet whether different images will yield better results with different percentages, but it can't be too different: the more color you mix in, the more ghosting.

 

 

TO DO LIST: (ideas for improvement, some of which I may not get to)

  1. Tidy up the code, make variable names make more sense, remove unnecessary bits.
  2. The current offset varies from 5/8 to 8/8 of the wallpaper width. This is based on how I used to draw 3d pictures in my journals using a ruler, with 5/8 inch being for the nearest parts of the picture, and a full inch being for the background. It might be better to change this up a bit, maybe make the 5 a variable I can change and try out different values. Maybe a 6/8 limit will be easier on the eyes?
  3. Currently the wallpaper is just a placeholder for the width. (Wallpaper textures I started with lacked sufficient detail, so for testing purposes I just switched to random noise.) I want to get back to controlling the image that repeats a bit more, maybe mixing a column with random noise. Another idea is to have a separate full color image that is mostly alpha but has a few crucial details that I want to repeat in the right positions. I could have the Center algorithm look ahead and grab colors from any non-alpha-Zero pixels and include them (probably requiring another array.)
  4. My depth map creation method (in Blender) needs to be improved so it is easier to reuse and yields better results. (The floor in the coffee cup example was flat, but the depth map falloff bends it oddly.)  There could be some way to fix bad depth maps in code by adjusting values along some sort of curve, but it is probably easier to just create a decent depth map to begin with.
  5. Animation, of depth map and wallpaper and full-color image
  6. Kinect2 depth map/color? mua ha ha ha (I don’t have one, and the needed adapter is on backorder anyway, but I can dream.) Plenty to do in the meantime. Of course this would probably be best as a separate program, since I have no idea how to begin to make stereogram generation a real-time process.

 


Getting 1.44 inch ‘Small’ PaPiRus working

I am new to Raspberry Pi, so forgive me if this is clunky. Anyhow, the box of parts that came with my Papirus “hat” was a little daunting, and it took a few days to get the screen intelligible. So here’s what I wish someone on the internet had told me, all in one place.

Notes – I'm using a Raspberry Pi 3 running the Raspbian operating system, with the Pi Supply PaPiRus 'Small'. You will need internet access to do this too… guess that's not a problem if you're reading this.

 

Physical Assemblage

Obviously assemble it unplugged and ground yourself first so you don’t damage the electronics with your ‘energetic’ personality. (static electricity I mean)

The four 'switches' and that brass-looking pin appear to be optional. I still haven't figured out what they are for, but unless you know you need them, don't worry about them; they don't seem to matter here.

The latch that connects the screen to the Pi Supply PaPiRus board is – well, a latch. Just sliding the screen connector into it won't hold: you have to open the latch, seat the connector, and close it again.

I used the four double-sided sticky-pads to attach the back of the screen to the top of the board.

The plastic screws and spacers work nicely to connect the PaPiRus HAT to the Pi. Just be aware the official case will no longer fit your Pi with the PaPiRus HAT attached. Oh well.

 

The Software Fun – Getting the Small 1.44 inch screen set to the right size.

My two sources for this:

https://github.com/PiSupply/PaPiRus
And Page 19 of the PDF from here:
https://learn.adafruit.com/repaper-eink-development-board-arm-linux-raspberry-pi-beagle-bone-black/wiring-the-raspberry-pi-1
Retracing the steps I took (some of this may be unnecessary. Hopefully I didn't leave anything important out.)

Open a terminal.

sudo apt-get install python-imaging
sudo git clone https://github.com/PiSupply/PaPiRus.git
cd PaPiRus
sudo python setup.py install
sudo papirus-setup  

sudo nano /etc/default/epd-fuse

# Using the arrow keys, change EPD_SIZE=2.0 to EPD_SIZE=1.44, and make sure
# that line doesn't have a # in front of it (if it does, remove the #).
# Hold Control and press "O" to save this change, then Control and "X" to exit.

sudo service epd-fuse start
cat /dev/epd/panel

sudo papirus-set 1.44
sudo papirus-write "Shh.. This is a secret message."

 
(picture of display once I got it working )

The text can stay displayed even once you shut down your pi and disconnect the power (in that order, PLEASE!!!)

And that's it. Some of that may be unnecessary (I was just retracing my steps), and hopefully I didn't leave anything out. Again, my two sources are linked above. Hopefully this saves you a little frustration and contributes in some small way to the continued development of e-ink technology. (I just hate backlit screens. :-)

 

 


Using nodes to illustrate the paths of snowflakes…

Here’s the step-by-step. It assumes you know your way around Blender. Even if you don’t, just look at the illustrations and it will start to make sense. Skip them and it won’t make any sense. At all. 😉

1. Shoot video of snowflakes falling. The camera can’t be moving or this won’t work. Also, the background should be darker than the snowflakes themselves. This shouldn’t be too much of a problem as snowflakes tend to be pretty white. Just be aware that they won’t really be visible against the sky or the snow-covered ground.

(I filmed it slow-mo at about 120fps, but this is not totally necessary.)

2. Pull the video into Blender’s Video Sequence Editor (VSE) and render out the individual frames to separate images (I use .jpg to save space and time). This just makes it easier for the computer to work with.

3. Break out Blender’s Node Editor. This is where most of the work will be done.

ITERATION ONE –


These are the nodes for the first iteration.

The image-sequence from the last step is used for all five inputs, each offset by one frame. (In this case, an initial offset of 500 was used to avoid any shaking at the beginning of the video, so the actual “offset” settings are 500, 501, 502, 503, and 504.)

The "Lighten" nodes combine all of the inputs, comparing each pixel across the images and choosing the lightest value at each position for the output. If one input were black and the other gray, gray would be the output. If one input were white and the other gray, white would be the output. Since the snow is lighter than its background, the snow wins out. The output of this first iteration is a series of frames, each depicting the short path of the snow over five of the original input frames.
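Outside of Blender, the same trick is just a per-channel maximum. A rough Python/numpy equivalent of one output frame (frame loading is left as a placeholder):

import numpy as np

def lighten_stack(frames):
    # keep, for every pixel and channel, the lightest value seen in any frame --
    # the same thing the chain of "Lighten" mix nodes does
    return np.maximum.reduce(frames)

# one frame of "iteration one": the lighten-merge of five consecutive inputs,
# where frames[i] stands for the image rendered at offset i
# out_frame = lighten_stack([frames[i + k] for k in range(5)])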

ITERATION TWO –


These are the nodes for the second iteration. Almost the same, but notice the “offset” is different by five from one input node to the next.

The nodes for this second iteration take as their input the frames that were the output of the previous step. It is almost the same setup, but this time the offset differs by five from one node to the next. The result is an output with snow-trails that are 25 frames long. If you want to run it through another iteration, 25 will be your offset.


One, Five, Twenty-five, and 125 Frames, respectively.

TOP LEFT: Regular video frame.

TOP RIGHT: After first iteration, 5 frames in one.

BOTTOM LEFT: After second iteration, 25 frames in one.

BOTTOM RIGHT: After a third iteration, 125 frames in one.

This video is – I think – 50 or 75 frame snow-trails, in motion.

Now this in itself was fun, but I got to playing around and found another node setup that made this look more artsy, like the crayon-wielding child that I am.

CRAYON:

Take the output of Iteration two, and input it to a new set of nodes:


This is just one setup; nothing "correct" about it. There is a 25-frame gap between the trails of blue, green, and red. I later discovered that you can overlap them too, yielding a couple more colors.

Basically this has two parts:

First it turns the 25-frame snowtrail into the following pattern: 25 frames blue – 25 frames nothing – 25 frames green – 25 frames nothing – 25 frames red. (See the RGB nodes)

Second, it cuts out the “Value” (lightness/darkness) of the overall image and replaces it with a 50% gray, making it look like a sheet of paper. (See the HSV nodes)
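For the curious, here is a rough Python equivalent of that crayon pass (an illustration only, assuming the 25-frame-trail images are float arrays in 0..1; the real thing is the node setup above):

import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def crayon_frame(trails, t, spacing=50):
    # put three time-shifted copies of the snow-trails into blue, green and red
    # (25 frames of trail plus a 25-frame gap = 50 frames between colors),
    # then flatten the Value to 50% for the sheet-of-paper look
    rgb = np.zeros_like(trails[t])
    rgb[..., 2] = trails[t][..., 2]                        # blue: the newest trails
    rgb[..., 1] = trails[max(t - spacing, 0)][..., 1]      # green: 50 frames earlier
    rgb[..., 0] = trails[max(t - 2 * spacing, 0)][..., 0]  # red: 100 frames earlier
    hsv = rgb_to_hsv(rgb)
    hsv[..., 2] = 0.5
    return hsv_to_rgb(hsv)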

What was recorded at 120fps is played at 12fps, so it is one-tenth realtime. (I would have made it 24fps, but the video codec chews up the crayons at that pace and the result is ugly.)



Quick ref for my CHDK on Canon SX510HS

Just some notes to make the process quicker for myself next time:

In using STICK to setup card
1. give it a picture copied straight from the sd card without using the usual windows import thing
2. make sure to start STICK from file named stickx.cmd or you won’t have the permission to do anything with the card
3. I used version 101a, renaming the manually downloaded file to test101a.zip and putting it in the stick folder. Not sure if this was necessary or even what was finally loaded onto the camera once I got it working though.


Kino > Export > Other

Today I received in the mail my PYRO A/V Link, which allows me to run my old VHS creations into my new computer via FireWire. My new computer (the only one I have with FireWire) runs Linux Mint. I was expecting a bit of hassle to get it working. Really, it wasn't terribly hard, thanks to the guidelines some guy named "Rob from United States" posted on a forum over 6 years ago:

“I noticed that Linux was not listed for compatibility. I
have used the ADS Pyro A/V Link under Linux successfully.
I needed to compile and install the IEEE1394 driver
modules, which weren’t immediately available in my distro,
but once that was done I had zero trouble capturing video
from my Hi8 camcorder and a VHS VCR. I have gotten very
good quality captures, with little or no frame drop.
I use Fedora Core 2, mjpegtools, kino, vlc, and growisofs,
along with supporting tools and libraries.
My only problem initially was recognizing that the box had
to be triggered to send frames using “dvcont play” before
I could start capturing frames.
I am very happy with the performance of this device.
Comments posted by Rob from United States, October 23, 2004:
Compatibility: Win95? Win98? Win2K? WinXP? Vista? NT4? MAC? Linux – Rated: 8 of 10.”

(quoted from http://www.videohelp.com/capturecards/adstech-pyro-a-v-link/258 )

After a bit of fussing I got it working, and am rather impressed. But the trouble with video is not just getting it into the computer, but getting it compressed enough so it can actually stay there.
So here I am recording some simple "bracketing" tests that I am doing, to remember which compression works best for me. Each of these tests is on the same three-minute video. (Please note that these tests are not carefully controlled benchmarks; they are merely intended to give me an idea of my options.)

DVD-Video Dual Pass (FFMPEG)

  • VOB 2m13s 135.6mb

DVD-Video Single Pass (FFMPEG)

  • VOB 1m19s 135.6mb

Flash Dual Pass (FFMPEG)

  • Broadband Quality FLV (medium size, 564 kb/s) 1m53s 18.5mb
  • Low Quality FLV (small size, 12fps, 128kb/s) 0m57s 4.1mb
  • Broadband Quality SWF+XHTML (medium size, 564 kb/s) 2m04s 12.3mb
  • Low Quality SWF+XHTML (small size, 12fps, 128 kb/s) 0m59s 2.8mb

HuffYUV AVI (FFMPEG)

  • Native size 0m52s 2.0gb
  • Full size 0m54s 1.8gb
  • Medium size 0m39s 535.1mb
  • Small size 0m30s 165.6mb

MPEG-4 AVI Dual Pass (FFMPEG)

  • Best Quality (native size, interlace, 2240 kb/s) 2m21s 48.4mb
  • High Quality (full size, progressive, 2240 kb/s) 2m08s 48.4mb
  • Medium Quality (medium size, progressive, 1152 kb/s) 1m29s 25.0mb
  • Broadband Quality (medium size, progressive, 564 kb/s) 1m28s 12.4mb
  • Low Quality (small size, 12fps, progressive, 128 kb/s) 0m41s 2.9mb

MPEG-4 AVI Single Pass (FFMPEG)

  • Best Quality (native size, interlace, VBR) 1m33s 525.9mb
  • High Quality (full size, progressive, VBR, QPEL) 1m41s 385.4mb
  • Medium Quality (medium size, progressive, VBR) 0m49s 53.7mb
  • Broadband Quality (medium size, progressive, 564 kb/s) 0m47s 12.5mb
  • Low Quality (small size, 12fps, progressive, 128 kb/s) 0m23s 3.0mb

Ogg Theora (gstreamer)

  • Best Quality (native size) 4m18s 375.9mb
  • High Quality (full size) 3m35s 174.9mb
  • Medium Quality (medium size) 1m25s 72.7mb
  • Broadband Quality (medium size, 564 Kbps) 1m34s 12mb
  • Low Quality (small size, 128 Kbps) 0m46s 3mb

Quicktime DV FFMPEG

  • [No Profile Available] 0m08s 650.4mb

VCD (FFMPEG)

  • [No Profile Available] 0m46s 30.0mb

XviD MPEG-4 AVI Single Pass (MEncoder)

  • High Quality (full size, VBR, QPEL) 3m47s 292.2mb
  • High Quality (full size, 2240 kb/s) 2m10s 48.2mb
  • Medium Quality (medium size, VBR) 1m09s 42.4mb
  • Medium Quality (medium size, 1152 kb/s) fails after a few seconds, no file output
  • Broadband Quality (medium size, 564 kb/s) 0m53s 12.4mb

My choice(s) are colored green and underlined. It is of course not perfect, but it is the best compromise for me, as I prize both my computer space and my video quality. The only compressions that looked better were the ones that were scarcely compressed at all. At that rate a video of a little over 45 minutes uses 727.4mb, so I can safely expect to fit about an hour per gigabyte. Not the greatest, but by no means horrible either.

As a bonus, I tested importing it into Blender’s Video Sequence Editor (VSE), and it worked perfectly (when I set the framerate to 29.97 before importing the video, of course).
