Show and tell for hackers in DC.

Round 65: Thymezone Libraries

19 Feb 2019

Big thanks to WeWork for hosting.

https://www.wework.com/buildings/chinatown--washington-DC

Want to be an official sponsor? Check out

https://dc.hackandtell.org/sponsor.html

Notes copied directly from the hackpad!

Joey Shepard

I made a program to simulate peripherals (RAM, ROM, LEDs, buttons, screens, etc.) on my PC for use with a real 6502 processor connected to a microcontroller over USB.

The first calculator used a microcontroller, clock, flash drive, and IO. One chip, one LED – but reading a big RAM chip was problematic:

It took 49 cycles to read one byte – that’s slow!

Microprocessor – now it only takes two cycles, a huge improvement in performance! But we need many different chips to make this work.

When two chips try to talk over one another, they could be damaged! The decoder chip handles the traffic patterns.

Now we’re up to seven chips just to blink an LED! If you get one of the (many) wires wrong, the chips could be damaged.

Microprocessor + Microcontroller + Virtual Flash chip

Push buttons, toggle switches, and other peripherals can all be simulated.
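
The notes don’t record the wire protocol, but the PC side presumably answers read/write requests arriving over USB serial. A minimal sketch of that idea in Python with pyserial, assuming a made-up framing of `'R'` + 16-bit address for reads and `'W'` + address + data for writes:

```python
# Hypothetical PC-side peripheral simulator for the 6502-over-USB setup.
# The 'R'/'W' + little-endian address framing is an assumption, not the demo's protocol.
import serial  # pyserial

memory = bytearray(64 * 1024)  # simulated 64K address space (RAM/ROM/IO)

with serial.Serial("/dev/ttyUSB0", 115200) as port:
    while True:
        op = port.read(1)
        addr = int.from_bytes(port.read(2), "little")
        if op == b"R":                              # 6502 read cycle
            port.write(memory[addr:addr + 1])
        elif op == b"W":                            # 6502 write cycle
            memory[addr] = port.read(1)[0]
```

Mapping address ranges to LEDs, buttons, or a simulated screen is then just a dispatch on `addr`.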

Ben Robinson

They Exist: The Analytics of StoryCorps Interviews

https://twitter.com/benj_robinson

I scraped the StoryCorps application’s website (using their WordPress API) to get data on over 100K interviews done by regular people like you and me since 2015.

Studs Terkel, interested in the personal histories of regular people, inspired an initiative to collect people’s stories. StoryCorps developed a mobile application so that anyone could participate and record a StoryCorps interview.

There’s so much in the archives!

The app publishes through the WordPress API to their website, so there’s tons of data available.
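
Since it’s stock WordPress, the built-in REST API can be paged through with plain HTTP. A rough sketch of that kind of scraper (the base URL here is a guess at the route; `/wp-json/wp/v2/posts` is the standard WordPress endpoint):

```python
# Sketch: paging through a WordPress REST API with requests.
# The BASE URL is an assumed route, not a verified StoryCorps endpoint.
import requests

BASE = "https://archive.storycorps.org/wp-json/wp/v2/posts"  # hypothetical
interviews, page = [], 1
while True:
    resp = requests.get(BASE, params={"per_page": 100, "page": page})
    if resp.status_code != 200:   # WordPress errors out past the last page
        break
    batch = resp.json()
    if not batch:
        break
    interviews.extend(batch)
    page += 1

print(f"Fetched {len(interviews)} records")
```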

He then created a data frame in R. It’s an interview-level, question-level dataset.

Too big to put on GitHub! Roughly 1 million records.

You can record which questions you asked in the interview.

Top 10 questions

How would you like to be remembered? Can you describe your happiest memories? What are you proudest of? Etc.

We don’t have the computing power to analyze all of the audio (that’s a lot of audio interviews!) but we can analyze the other types of data.

The dataset doesn’t have transcriptions of the interviews themselves, but there is a link to each interview’s audio file.

Used the tidytext package and removed the recommended stop words.
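
The talk did this in R; the same tokenize-and-filter idea sketched in Python with pandas (the tiny stop-word set is illustrative – tidytext ships a much longer list):

```python
# Illustrative tokenizing + stop-word removal over interview question text.
import pandas as pd

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "you", "be", "what"}

df = pd.DataFrame({"question": ["How would you like to be remembered?"]})
words = (
    df["question"]
    .str.lower()
    .str.findall(r"[a-z']+")            # crude tokenizer
    .explode()
    .rename("word")
)
words = words[~words.isin(STOP_WORDS)]  # drop stop words
print(words.value_counts().head(10))    # top remaining words
```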

Not everyone fills out the location field or all of the data, and it’s not uniformly in English either.

There are tags/categories associated with the stories, but this hasn’t been analyzed yet.

Aaron Schumacher https://twitter.com/planarrowspace

I reverse-engineered the “MyIntuition” college cost estimator!

https://myintuition.org/quick-college-cost-estimator/

[Notes deliberately omitted by request]

Brandon Walraven

Word2Vec as a Linguistic Fingerprint (work in progress)

Capstone project: all of the suspended Twitter accounts from 2016. Could Word2Vec be used to generate a “fingerprint” of a person’s writing to determine that two accounts are the same person?

Based on the context of the words around it, Word2Vec assigns each word a vector with some number of dimensions.

Example: take the vector for king, subtract the vector for man, add the vector for woman – the result is close to the vector for queen.
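
With pre-trained vectors from gensim’s downloader, the analogy is a one-liner (the `glove-wiki-gigaword-100` model is one of gensim’s stock downloads, not necessarily what the project used):

```python
# Classic word-vector analogy: king - man + woman ≈ queen.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # small pre-trained embedding
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```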

The pipeline: build individual models, get the similarity for shared vectors, average the similarity across all vectors, and print out which users are closest – they’re probably the same person.
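
A minimal gensim sketch of that pipeline, with toy corpora standing in for each user’s tokenized tweets (and subject to the cosine-similarity caveat below):

```python
# Hedged sketch of the Word2Vec "fingerprint" comparison.
import numpy as np
from gensim.models import Word2Vec

def fingerprint_similarity(model_a: Word2Vec, model_b: Word2Vec) -> float:
    """Average cosine similarity over the words both vocabularies share."""
    shared = set(model_a.wv.index_to_key) & set(model_b.wv.index_to_key)
    sims = [
        np.dot(model_a.wv[w], model_b.wv[w])
        / (np.linalg.norm(model_a.wv[w]) * np.linalg.norm(model_b.wv[w]))
        for w in shared
    ]
    return float(np.mean(sims))

corpus_a = [["some", "tokenized", "tweet"], ["another", "tweet"]]  # placeholder data
corpus_b = [["some", "other", "tokenized", "tweet"]]
model_a = Word2Vec(corpus_a, vector_size=100, min_count=1, seed=1)
model_b = Word2Vec(corpus_b, vector_size=100, min_count=1, seed=1)
print(fingerprint_similarity(model_a, model_b))
```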

Compare the same person to themselves with some data removed: similarity ends up being ~0.85 at first, and shifts once the data frame is sorted.

Cosine similarity may not be the best metric; Word2Vec models may be too dependent on the conditions they are trained on to be directly transferable.

Stop words were not removed, since the process was trying to build the context.

Word frequency is another thing to look at.

Possible: BERT or ELMo – using pre-trained models.

All tweets are sorted chronologically.

Entire dataset was 200k tweets; about 30k tweets

Eric Haengel https://twitter.com/EricHaengel

I hacked together some hardware from a few different projects to make a RF signal level to light converter. If I turn on a radio transmitter near my hardware the lights turn red, and if I move it away they turn blue. Not super complicated, but kind of fun.

He works as a radio engineer and has been on a kick where he takes signals and turns them into light.

It’s a fun project: the light color tracks the signal level.

Blue (low/no RF signal) – Green (medium RF signal) – Red (high signal)

The RF detector is hooked up to an analog-to-digital converter, connected to a Raspberry Pi, which drives the lights.

Works with any kind of RF source.
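
The notes don’t include the code, but the mapping is simple enough to sketch: read the detector through the ADC and crossfade the LED color with the level. `read_rf_level` below is a placeholder for the actual ADC read:

```python
# Sketch of the signal-level-to-color mapping; read_rf_level() is a stand-in
# for reading the RF detector through the Pi's ADC.
import random
import time

def read_rf_level() -> float:
    """Placeholder: return signal strength normalized to [0, 1]."""
    return random.random()

def level_to_rgb(level: float) -> tuple[int, int, int]:
    """Crossfade blue -> green -> red as the signal level rises."""
    if level < 0.5:                                   # blue to green
        t = level / 0.5
        return (0, int(255 * t), int(255 * (1 - t)))
    t = (level - 0.5) / 0.5                           # green to red
    return (int(255 * t), int(255 * (1 - t)), 0)

while True:
    print(level_to_rgb(read_rf_level()))  # send to the LEDs instead of printing
    time.sleep(0.1)
```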

Stephen Trickey

Adafruit’s Circuit Playground Express with a Hot Potato “app”

If you put it in free fall, it changes color – but if it turns red, you’ve got the hot potato and you lose the game. Challenges with teaching kids to code: they’re not great typists, resources can be an issue, and sometimes they have short attention spans too.

But you don’t have to reinvent the wheel – Scratch is a great language for this!

Adafruit’s Circuit Playground Express (~$25) is packed with sensors and gizmos; you can program it with MakeCode, JavaScript, or CircuitPython.

There’s tons of documentation, resources, and tutorials for how to play with all of their hardware.

Adafruit’s tutorials are a great way to get into hardware!

MakeCode – an easy block interface similar to Scratch. It has a simulator, and it’s easy to download the code to the device.

makecode.adafruit.com
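
The demo used MakeCode blocks; a rough CircuitPython equivalent of the hot-potato logic might look like this (thresholds and odds are made up, and this is not the actual demo code):

```python
# Hot potato on the Circuit Playground Express: free fall triggers a color
# change, and red means you lose. Sketch only; constants are guesses.
import math
import random
import time

from adafruit_circuitplayground import cp

SAFE_COLORS = [(0, 0, 255), (0, 255, 0), (255, 255, 0)]

falling = False
while True:
    x, y, z = cp.acceleration                        # m/s^2 from the accelerometer
    in_free_fall = math.sqrt(x * x + y * y + z * z) < 2
    if in_free_fall and not falling:                 # a new toss just started
        if random.random() < 0.2:
            cp.pixels.fill((255, 0, 0))              # red: hot potato, you lose
        else:
            cp.pixels.fill(random.choice(SAFE_COLORS))
    falling = in_free_fall
    time.sleep(0.05)
```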

Lennon

http://www.lennonenglish.com/

I’m going to demonstrate a great way to put down song concepts in a short amount of time.

Lennon plays 15 instruments but can’t carry all of them together!

Playing between DC, Atlanta, and NYC – how can I write music on the go?

Digital Audio Workstations (DAWs) like FL Studio are a way of creating music with a number of different instruments.

Set the tempo to 114 BPM.

James Brown – set the pockets at 2s and 4s – that’s where the snares will go.

Drop things in where they belong in the time signature; here we’re using 4/4 time.
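
The demo built this in FL Studio’s step sequencer; the same backbeat expressed as a MIDI file with Python’s mido library (a sketch, not part of the demo):

```python
# One bar of 4/4 at 114 BPM with snares on beats 2 and 4.
import mido

PPQ = 480                                        # ticks per quarter note
mid = mido.MidiFile(ticks_per_beat=PPQ)
track = mido.MidiTrack()
mid.tracks.append(track)
track.append(mido.MetaMessage('set_tempo', tempo=mido.bpm2tempo(114)))

SNARE = 38                                       # General MIDI acoustic snare
rest = 0                                         # accumulated silence in ticks
for beat in range(4):
    if beat in (1, 3):                           # beats 2 and 4, zero-indexed
        track.append(mido.Message('note_on', channel=9, note=SNARE, velocity=100, time=rest))
        track.append(mido.Message('note_off', channel=9, note=SNARE, velocity=0, time=PPQ))
        rest = 0
    else:
        rest += PPQ                              # silent beat
mid.save('backbeat.mid')
```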

Adding hi-hats and pianos

Add B, D, F

Add Sytrus – FL Studio’s synth plugin; its oscillators generate the notes. It’s in the wrong key at first, but we can adjust it!

Open source options: Loop mechanism, BDRC

Virtual instruments.

You can also create your own instruments – even sampling the tinkling of glass.

There are 99 tracks inside each of those tracks, so you can layer.

Steinberg’s Cubase

An article came out last week about music that was generated by an AI – is it still art?

Using AI, maybe we could stretch the imagination and make predictions beyond what they’ve seen.

An AI can hear notes and frequencies that we can’t; you can use predictive analysis for mastering and auto-mixing, looking at songs in the same tempo as yours. If all of the happy songs right now are in the key of C, an AI would tell you that to make a [successful] happy song, you should put it in the key of C.

Nuendo, Cubase, and a few others will allow you to programmatically make those changes.

The 44.1 kHz range.

Travis Hoppe https://twitter.com/metasemantic

tripGAN: An exploration into some of the weird latent spaces of styleGAN

https://thoppe.github.io/Presentation_Topics/HnT_65_trypophobia/index.html

Deep learning is not a tool to solve a problem! Deep learning is a mirror (of our data); the uncanny valley is a roadmap.

If we feed it celebrity face data, we get to see what more celebrity faces look like.

Adding the FFHQ (Flickr-Faces-HQ) data, there’s a bit more diversity.

What if you took all of the eyes and moved them around the latent space?

Creepy!!
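
The notes don’t include the code, but “moving around the latent space” is usually done by interpolating latent vectors. A numpy sketch of spherical interpolation (slerp), with a `generator` network left as a stand-in for styleGAN:

```python
# Walking between two points in a GAN's latent space via slerp.
import numpy as np

def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation, the usual way to traverse GAN latent space."""
    omega = np.arccos(np.clip(np.dot(z0 / np.linalg.norm(z0),
                                     z1 / np.linalg.norm(z1)), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(512), rng.standard_normal(512)  # styleGAN z is 512-dim
frames = [slerp(z0, z1, t) for t in np.linspace(0, 1, 30)]
# each frame would be fed to the generator: image = generator(frame)
```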

Jay Kay

This old Radio Shack RC car from the ’90s can recognize my cat.

Car + Computer with GPU + Camera + HDMI transmitter

A Radio Shack RC car with a computer mounted on it. The car has eight 1.5 V batteries; the HDMI transmitter has its own battery; the computer (with GPU) has another battery.

The camera is mounted at the right height to recognize Jay’s cat.

The MobileNet vision algorithm, built for low size, weight, and power devices.
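
The notes don’t say which framework the build used; a hedged sketch of cat recognition with a stock Keras MobileNet classifier:

```python
# Cat detection with an off-the-shelf ImageNet MobileNet; weights, labels, and
# threshold are illustrative, not the demo's actual pipeline.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
CAT_LABELS = {"tabby", "tiger_cat", "Persian_cat", "Siamese_cat", "Egyptian_cat"}

def looks_like_cat(frame: np.ndarray) -> bool:
    """frame: an HxWx3 RGB array, e.g. one camera capture."""
    img = tf.image.resize(tf.cast(frame, tf.float32), (224, 224))[tf.newaxis, ...]
    img = tf.keras.applications.mobilenet_v2.preprocess_input(img)
    preds = model.predict(img, verbose=0)
    _, label, score = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=1)[0][0]
    return label in CAT_LABELS and score > 0.5
```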

A tiny computer running at 10 watts

Nvidia Jetson TX2; it has a Bluetooth speaker for extra fun.