There were announcements from two friends of the Hack and Tell meetup, both worth checking out:
On to the hacks:
Drew Mitchell @drew870mitchell
Inspired by Karpathy’s The Unreasonable Effectiveness of Recurrent Neural Networks, using some torch-rnn code and personal instant message history, Drew trained his computer to generate text like he might write. It isn’t always perfect, but it can be pretty good!
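Drew's hack uses torch-rnn, which trains a recurrent neural network to predict the next character. As a rough illustration of that "predict the next character, then sample" idea, here is a character-level Markov chain, a much simpler cousin of the char-RNN approach (the tiny corpus and function names here are just for the sketch, not from Drew's code):

```python
import random
from collections import defaultdict

def train(text, order=3):
    """Map each length-`order` character window to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=80):
    """Sample one character at a time, conditioned on the last `order` characters."""
    order = len(seed)
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:
            break
        out += random.choice(followers)
    return out

corpus = "nah twitter is buying your mind but i do have to get an occupation "
model = train(corpus)
print(generate(model, "nah"))
```

A real char-RNN learns a smoothed version of these next-character probabilities with an LSTM, so it can generalize beyond windows it has literally seen.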
“nah Twitter is buying your mind”
“but I do have to get an occupation”
Aaron Schumacher @planarrowspace
Can we simulate interdimensional cable by finding truly random YouTube videos? Turns out there is some security by obscurity, which Aaron found out the hard way. Fail fast, they say! You can still watch pseudo-random YouTube.
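For scale: YouTube video IDs are 11 characters from a 64-character URL-safe alphabet, so uniform guessing almost never lands on a real video, even before any API obstacles. A sketch of the ID-guessing idea (not Aaron's actual code):

```python
import random
import string

# YouTube IDs are 11 characters drawn from a 64-character, URL-safe alphabet.
ALPHABET = string.ascii_letters + string.digits + "-_"

def random_video_id():
    """Generate one candidate ID; the vast majority of candidates won't exist."""
    return "".join(random.choice(ALPHABET) for _ in range(11))

# 64**11 possible IDs (roughly 7.4e19) versus billions of real videos,
# so a uniform guess hits a real video only about once in billions of tries.
print(random_video_id())
```

Checking whether a candidate exists requires querying YouTube, which is where the security-by-obscurity friction comes in.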
Ali Spittel @aspittel
When you’re learning new things, you might as well write your own custom blogging platform! Why not use Gatsby, a progressive static site generator for React, which uses GraphQL, among other things? Ali has done all this, and more!
Nathan Epstein @Aeium
Thanks to Nathan, big games at The National Go Center are now double-gamified: spectators have their own meta-game to play while the matches are broadcast on Twitch! With go-bet, observers can chat and place “bets” on where the actual players will move next. It makes the experience way more fun!
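One way such a meta-game could score guesses (this is a hypothetical scoring rule for illustration, not necessarily what go-bet does) is to reward exact predictions and give partial credit for near misses:

```python
def score_bet(guess, actual):
    """Hypothetical scoring: exact hit wins big, near misses earn partial credit.

    Coordinates are (column, row) pairs on a 19x19 go board.
    """
    dx = abs(guess[0] - actual[0])
    dy = abs(guess[1] - actual[1])
    distance = max(dx, dy)  # Chebyshev distance: diagonal neighbors count as 1
    return max(0, 10 - 2 * distance)  # 10 for a hit, down to 0 at distance 5+

print(score_bet((3, 3), (3, 3)))  # exact guess: 10
print(score_bet((3, 3), (5, 4)))  # two points away: 6
```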
What do you learn when you try to reimplement DeepMind’s Deep Q-Network reinforcement-learning approach to play Atari Breakout in OpenAI’s gym? You learn there are lots of details you have to get right!
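One of those details is the bootstrapped target at the heart of DQN: move the value estimate toward r + γ·max Q(s′, ·). A toy tabular version of that update, without the neural network or the Atari pixels (the tiny two-action setup here is a made-up example):

```python
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9
Q = defaultdict(lambda: [0.0, 0.0])  # state -> estimated value of each of 2 actions

def q_update(state, action, reward, next_state, done):
    """One Q-learning step: nudge Q(s, a) toward the bootstrapped target."""
    target = reward if done else reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])

# One hypothetical transition: action 1 in state "s0" earns reward 1.0.
q_update("s0", 1, 1.0, "s1", done=False)
print(Q["s0"])  # [0.0, 0.5]
```

DQN replaces the table with a network, which is where the tricky details (replay buffers, target networks, frame preprocessing) come in.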
Which Pokémon are the same height as you? Or the same weight? Now you can find out with Nhi's app (frontend/backend), called PokeHeiWei! The React/Express app pulls data from the Pokémon API, stored locally in MongoDB.
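The Pokémon API lists heights in decimeters, so the matching itself is just a filter. A sketch of the height-matching idea in Python (the real app is React/Express over MongoDB; the handful of records here are hardcoded for illustration):

```python
# A sample of Pokemon API-style records; heights are in decimeters.
POKEMON = {
    "pikachu": 4,
    "charizard": 17,
    "snorlax": 21,
    "onix": 88,
}

def same_height(height_cm, tolerance_dm=1):
    """Find Pokemon whose listed height is within `tolerance_dm` of yours."""
    height_dm = round(height_cm / 10)
    return sorted(
        name for name, dm in POKEMON.items()
        if abs(dm - height_dm) <= tolerance_dm
    )

print(same_height(170))  # about 17 dm tall -> ['charizard']
```

Weight matching works the same way on the API's hectogram field.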
Aditya Jain @whaleandpetunia
“On March 31, 1939, when John and Ruby Lomax left their vacation home on Port Aransas, Texas, they already had some idea of what they would encounter on their three-month, 6,502-mile journey through the southern United States collecting folk songs.”
Travis Hoppe @metasemantic
A video is like a cube with two space dimensions and a time dimension. Timecube (code) lets you watch along a space dimension instead of along time. Turns out to be a fairly non-standard thing to do with video, and requires some tricky processing!
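Reslicing the cube is conceptually simple: each output "frame" is a y–t slice at a fixed x. A minimal pure-Python sketch of the axis swap (real footage would need ffmpeg-style decoding into such an array first; this tiny example is made up):

```python
def along_x(video):
    """Reslice a video cube: video[t][y][x] -> out[x][y][t]."""
    T = len(video)
    H = len(video[0])
    W = len(video[0][0])
    return [[[video[t][y][x] for t in range(T)]
             for y in range(H)]
            for x in range(W)]

# A tiny 2-frame, 2x3 "video" of single pixel values.
video = [
    [[0, 1, 2],
     [3, 4, 5]],
    [[6, 7, 8],
     [9, 10, 11]],
]
resliced = along_x(video)
print(len(resliced))  # W = 3 output "frames"
print(resliced[0])    # the y-t slice at x = 0: [[0, 6], [3, 9]]
```

The tricky part in practice is that real videos are far too big to hold as one in-memory cube, so the reslicing has to be streamed.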