Liz Merkhofer - @elizmerkhofer
Underdetermined neural network hack (involving puppies of Instagram)
Automatic captioning, done with an “Inception net”, a type of convolutional neural network. One open-source implementation is Google’s “Show and Tell”.
Apparently Instagram’s policy doesn’t allow people to use computer vision technology on posts without special permission, so Liz went to Twitter instead to gather images to train a neural network.
She used TensorFlow to implement a “simple” (as she described it) neural network; it seemed complicated to me! And it works: her network can automatically caption images of her dog.
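Her code isn’t reproduced here, but the rough shape of a Show-and-Tell-style captioner is an encoder-decoder: a pretrained Inception-style CNN encodes the image into a vector, and an LSTM decodes a caption from it one word at a time. A hypothetical Keras-flavored sketch (layer sizes and vocabulary are made up):

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    VOCAB_SIZE = 5000   # made-up vocabulary size
    MAX_LEN = 20        # made-up maximum caption length

    # Image encoder: a pretrained InceptionV3 without its classification head.
    cnn = tf.keras.applications.InceptionV3(include_top=False, pooling="avg")
    image_input = layers.Input(shape=(299, 299, 3))
    image_vec = layers.Dense(256, activation="relu")(cnn(image_input))

    # Caption decoder: embed the words seen so far, run an LSTM seeded with
    # the image vector, and predict the next word of the caption.
    caption_input = layers.Input(shape=(MAX_LEN,))
    embedded = layers.Embedding(VOCAB_SIZE, 256)(caption_input)
    hidden = layers.LSTM(256)(embedded, initial_state=[image_vec, image_vec])
    next_word = layers.Dense(VOCAB_SIZE, activation="softmax")(hidden)

    model = Model([image_input, caption_input], next_word)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")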
Keren Tseytlin
Creating an Amazon Alexa Skill!
https://github.com/ktseytlin/AlexaComplimentMe
Alexa turns spoken words into JSON, passes that JSON to a function you write, and turns the JSON your function returns back into spoken words. Anyone can make an Alexa skill; it costs pennies a month.
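Concretely, the function in the middle is typically an AWS Lambda handler: it receives a JSON request describing the intent the user triggered and returns a JSON response whose text Alexa reads aloud. A minimal sketch (the intent name and speech text here are placeholders):

    # Minimal sketch of an Alexa skill handler (e.g. on AWS Lambda):
    # JSON request in, JSON response out, Alexa speaks the "text" field.
    def lambda_handler(event, context):
        intent = event["request"].get("intent", {}).get("name", "")
        if intent == "ComplimentIntent":        # hypothetical intent name
            speech = "You are doing a great job."
        else:
            speech = "Sorry, I didn't catch that."
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech},
                "shouldEndSession": True,
            },
        }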
Keren wrote an Alexa skill that locates breweries near a specified zip code. Her code sends a request to http://www.brewerydb.com/ for the center of the zip code, parses the JSON that BreweryDB sends back, and Alexa speaks the results. She demo’d this for us with the zip code 90210, and Alexa told us the three closest breweries.
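The brewery lookup itself boils down to one HTTP request and a little JSON parsing. A rough sketch of that step; the endpoint, parameters, and response fields below are guesses for illustration, not BreweryDB’s documented API:

    import requests

    def closest_breweries(zip_code, api_key, count=3):
        # Hypothetical BreweryDB query; endpoint and parameter names are
        # assumptions, check the real API docs before using them.
        resp = requests.get(
            "http://api.brewerydb.com/v2/locations",
            params={"postalCode": zip_code, "key": api_key},
        )
        data = resp.json().get("data", [])
        names = [loc.get("brewery", {}).get("name", "unknown") for loc in data[:count]]
        return "The closest breweries are " + ", ".join(names) + "."

Alexa then just speaks whatever string the function returns.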
She also wrote an Alexa skill that compliments you; you can check it out on her Github. Chris Nguyen took some time before his hack to demo this skill, to everyone’s appreciation.
Github: github.com/ktseytlin
Chris Nguyen - @uncompiled
Alexa Fruit Stand responds to your voice and tells you the best way to store fruits and vegetables
Chris created an Alexa skill that tells you how to store fruits and vegetables. He demo’d this for everyone by asking Alexa how to store cucumbers.
Apparently, Alexa doesn’t know how to store pianos, but at any rate she was very polite about telling us.
For a database of how to store fruits/vegetables, he grabbed data from a University of California agriculture website.
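Under the hood, a skill like this can be little more than a lookup table keyed by produce name, with a polite fallback for things (like pianos) it doesn’t know about. A hypothetical sketch with made-up storage tips:

    # Hypothetical produce-storage lookup with a polite fallback,
    # the kind of table the skill could build from the UC agriculture data.
    STORAGE_TIPS = {
        "cucumbers": "Store cucumbers in the refrigerator crisper drawer.",
        "bananas": "Store bananas at room temperature, away from other fruit.",
    }

    def storage_advice(item):
        item = item.lower().strip()
        return STORAGE_TIPS.get(
            item,
            f"Sorry, I don't know how to store {item}, but I'm sure it will be fine.",
        )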
Github: github.com/uncompiled/alexa-fruit-stand
Eric P
Using the University of Washington’s OpenIE project to parse political tweets
Eric created a “Twitter Fill-in-the-Blank app” which blanks out political candidate names in political tweets and then asks a user to try to guess who/what goes in the blank.
OpenIE is a University of Washington tool that can parse a sentence into subject, verb, and object. CoreNLP is a similar tool from Stanford with lighter computing requirements. Each tweet is passed to the NLP library, the subject is stripped out, and three random subjects are added as decoys for the user to choose from.
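As a rough illustration of the blanking step, here’s a sketch that uses spaCy’s dependency parse as a stand-in for OpenIE/CoreNLP (the decoy subjects would come from other tweets):

    import random
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def make_quiz(tweet, decoys):
        """Blank out the grammatical subject and offer multiple choices."""
        doc = nlp(tweet)
        subjects = [tok.text for tok in doc if tok.dep_ in ("nsubj", "nsubjpass")]
        if not subjects:
            return None
        answer = subjects[0]
        blanked = tweet.replace(answer, "_____", 1)
        choices = random.sample(decoys, 3) + [answer]   # three decoys plus the answer
        random.shuffle(choices)
        return blanked, choices, answer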
OpenIE/CoreNLP use older NLP technology. In the future, he’d consider using neural networks to try to do something similar.
Charlie Greenbacker
I’ll show off a demo of a tool I built to automatically identify place names mentioned in text, intelligently disambiguate those names, and produce GeoJSON for the places found in the text.
Charlie created a web app that takes a sentence referring to several locations that could be ambiguous on their own (Springfield, Athens, etc.) and correlates them to find the most likely places the sentence is talking about.
He’s giving out free API keys, check it out for yourself!
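The GeoJSON output is just a FeatureCollection of points, one per resolved place. A hypothetical helper for that last step (the place names and coordinates below are illustrative, and the geoparsing API call itself is omitted):

    # Hypothetical helper: turn resolved places into a GeoJSON FeatureCollection.
    def to_geojson(places):
        return {
            "type": "FeatureCollection",
            "features": [
                {
                    "type": "Feature",
                    "geometry": {"type": "Point", "coordinates": [lon, lat]},
                    "properties": {"name": name},
                }
                for name, lat, lon in places
            ],
        }

    # Illustrative output for two disambiguated place names.
    resolved = [("Springfield, Illinois", 39.78, -89.65), ("Athens, Georgia", 33.96, -83.38)]
    geojson = to_geojson(resolved)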
Kwan-Yuet (Stephen) Ho
Short Text Categorization with Word Embeddings
Stephen uses Word2Vec to embed the words of a short statement, and then uses one of two algorithms to label the statement as related to Mathematics, Physics, or Theology.
One algorithm takes the vectors for all the words in the statement and averages them; the other runs a convolutional neural network over the vectors to classify the statement.
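The averaging approach only takes a few lines with gensim and scikit-learn. A minimal sketch, with a toy corpus standing in for real labeled statements and a logistic regression standing in for whatever classifier he actually used:

    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.linear_model import LogisticRegression

    # Toy corpus standing in for real labeled statements.
    texts = ["the prime numbers are infinite", "quantum fields have energy", "grace and salvation"]
    labels = ["Mathematics", "Physics", "Theology"]

    tokenized = [t.split() for t in texts]
    w2v = Word2Vec(tokenized, vector_size=50, min_count=1)

    def embed(tokens):
        # Average the word vectors; zeros if no token is in the vocabulary.
        vecs = [w2v.wv[w] for w in tokens if w in w2v.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.wv.vector_size)

    X = np.array([embed(t) for t in tokenized])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(clf.predict([embed("energy of a field".split())]))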
Jessica Garson
A Twitter bot that combines Earth Crisis lyrics with Clueless quotes. Can be followed at earthclueless.
Earth Crisis lyrics aren’t always the most suitable for Twitter, so Jess was hesitant to turn on the bot, but she wanted to show it for Hack and Tell so she decided to take the risk.
Github: https://github.com/JessicaGarson/earthcluelessbot/tree/master
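The bot pattern itself is simple: pick a random line from each source and tweet the mashup. A hedged sketch with tweepy; the lyric and quote lists are placeholders, and the credentials are your own:

    import random
    import tweepy

    # Placeholder snippets standing in for the real lyric and quote files.
    earth_crisis_lines = ["Firestorm to purify", "The discipline"]
    clueless_quotes = ["As if!", "I'm totally buggin'."]

    def mashup():
        return f"{random.choice(earth_crisis_lines)} / {random.choice(clueless_quotes)}"

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)
    api.update_status(mashup())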
Travis Hoppe
Penetration testing passwords with a recurrent neural network
Travis was able to get his hands on the hacked 2012 LinkedIn password/email database. Unfortunately (or fortunately?) the passwords were hashed, so they look like a jumble of letters and numbers.
Travis trained a recurrent neural network to crack these passwords and figure out what the original passwords are.
His network was trained using TensorFlow, and it took only 8 lines of code!
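His eight lines aren’t reproduced here, but the idea is a character-level RNN trained on known plaintext passwords, then sampled to generate likely guesses to hash and compare against the leak. A rough Keras-style sketch of such a model (sizes made up, not Travis’s code):

    import tensorflow as tf
    from tensorflow.keras import layers

    VOCAB = 100     # number of distinct characters
    MAX_LEN = 16    # longest password prefix considered

    model = tf.keras.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB, 64),
        layers.LSTM(128),
        layers.Dense(VOCAB, activation="softmax"),  # probability of the next character
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # Train on (prefix, next character) pairs from known passwords, then sample
    # characters from the model to produce candidate passwords to hash and compare.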
Want a secure password? Use the curly brace. Don’t use 1234, your Social Security number, or your phone number; apparently those are pretty common.
https://github.com/thoppe/5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8