Crestron Masters Hackathon 2021

Jeremy Weatherford
5 min read · Apr 30, 2021

Or, Next Time Read ALL The Instructions

It’s time for the annual after-action report from the Hackathon. This was our second virtual hackathon (thanks, Covid 😥) and it was set up similarly to last year: no teams, everyone interacting with a single web server using whatever tools they prefer.

Last year I got myself in trouble by trying to solve the maze problem manually instead of taking the time to implement a solving algorithm, so this year I was psyching myself up to automate everything. I had my node.js environment ready to go with my Hackathon credentials loaded, and just needed the IP of the server and the URL endpoint to hit.

When the gates opened and I got the two-page instruction sheet, I immediately jumped into running GET requests on the server, logging the first request of the hackathon. The goal was to retrieve a question from the server in one of four categories, solve it, and answer back within 5 seconds (so you couldn’t do it entirely manually). The categories were Math, Word Count, Alpha Sort, and Decryption. The first problem I got back was Math, so I wrote a quick solver around JavaScript’s eval() function and got an Incorrect response. I checked it by hand and everything looked fine, so I went into troubleshooting mode to make sure my program was doing what I thought it was.
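
The fetch-and-solve side of my script looked something like this (the endpoint path and the shape of the question object here are placeholders, not the real API; the flow is what the instruction sheet described):

    const request = require('request');

    // Hypothetical endpoint; the real IP and path came from the instruction sheet.
    const QUESTION_URL = 'http://<server-ip>/question';

    function solveMath(expr) {
      // Naive first pass: let JavaScript evaluate the expression directly.
      return String(eval(expr));
    }

    request.get({ url: QUESTION_URL, json: true }, (err, res, question) => {
      if (err) throw err;
      // Assumed shape: { category: 'Math', text: '2 + 3 * 4' }
      console.log(question.category, question.text);
      if (question.category === 'Math') {
        console.log('answer:', solveMath(question.text));
      }
    });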

While troubleshooting, I got some Word Count questions, so I started implementing those. After fixing a dumb mistake (I forgot to ignore punctuation), I was sure I had the right answer but was still getting Incorrect responses from the server. I read through the instructions again and realized I was supposed to submit a text/plain response, not JSON (oops). Unfortunately, it took me quite a few tries to figure out how to get the request library to send a plaintext body. For some reason my Google-fu failed me: I kept getting answers about parsing plain text in Express instead of the other way around, and then I would get errors from request about how it couldn’t parse the return message (even though the response was in JSON?). Eventually I settled on the magic incantation request({method: 'PUT', url: url, body: answer, json: false, headers: {'Content-Type': 'text/plain'}}), but it took me way too long to get there. More familiarity with my tools would have helped here.
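
For reference, the Word Count solver and the plaintext submission ended up looking roughly like this (the tokenizing rules the server actually used are a guess on my part; stripping punctuation before splitting on whitespace was the change that mattered):

    const request = require('request');

    // Word Count, after the punctuation fix.
    function wordCount(text) {
      return String(
        text
          .replace(/[^\w\s]/g, ' ') // drop punctuation (my earlier dumb mistake)
          .trim()
          .split(/\s+/)
          .filter(Boolean)
          .length
      );
    }

    // The plaintext PUT that finally worked.
    function sendAnswer(url, answer, callback) {
      request({
        method: 'PUT',
        url: url,
        body: answer,
        json: false,
        headers: { 'Content-Type': 'text/plain' }
      }, callback);
    }

    // e.g. sendAnswer(questionUrl, wordCount(question.text),
    //                 (err, res, body) => console.log(body)); // Correct / Incorrect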

I wasn’t too worried about all the Incorrect answers though, as I worked my way through the question categories one at a time. I noticed I was getting all the remaining Math problems wrong, and that they all used exponent notation with the ^ sign. A quick REPL session confirmed that JavaScript does NOT use ^ for exponentiation, so I plugged the remaining math questions into Wolfram Alpha and hard-coded the responses. I had to remove the spaces in the questions, since Chris Tatton had evilly randomized the spacing, but then I started getting those correct as well.
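
For the record, ^ in JavaScript is bitwise XOR and ** is the exponent operator, so in hindsight a one-line translation would have beaten the hard-coded Wolfram Alpha answers. A sketch, assuming the problems only used numbers and + - * / ^:

    // ^ is bitwise XOR in JavaScript; ** is exponentiation.
    console.log(2 ^ 10);  // 8, not 1024

    function solveMath(expr) {
      const normalized = expr.replace(/\s+/g, '').replace(/\^/g, '**');
      return String(eval(normalized));
    }

    console.log(solveMath('2 ^ 10')); // 1024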

Somewhere around this point I realized I was getting different numbers of points for each correct response, and decided to read the rest of the instructions 🙄. Turns out I should have been a lot more worried about those Incorrect answers — you can only get full credit for a response on the first try: 100 points, then 75, 50, 25, down to a measly 1 point. After all my failed experiments, most of the questions I was answering correctly were now worth one point. 😐

I did implement Decrypt in software fairly efficiently, and at that point answered the rest of the questions correctly… except for the notorious Buffalo sort. The Buffalo sort was the last question for everyone to answer correctly for a very good reason: Chris Tatton had programmed the wrong answer on his end. I called him out on it, he double-checked, and admitted that it was wrong, although he mentioned that “it was a very easy mistake to make” or something like that. Since I didn’t have anything else to do, I started thinking about what that error might be. Capitalization? I had already tried that both ways.

Flashback to me in college, competing in the ACM ICPC collegiate programming contest. It was a grueling event, with 8 problems given during the 3-hour contest, and my team was usually happy if we got 3–4 of them correct. So the year I got a problem correct EXCEPT for a trailing space that I erroneously included, I was absolutely furious with myself. In fact, the next year I wrote a little script to double-check all of my program output and print a GIANT BANNER to the terminal if a trailing space was detected.
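
The whole check is only a few lines in any language; a quick Node sketch of the idea:

    // Usage: node check-trailing.js < program-output.txt
    // Reads stdin and yells if any line ends in a space or tab.
    const lines = require('fs').readFileSync(0, 'utf8').split('\n');
    lines.forEach((line, i) => {
      if (/[ \t]+$/.test(line)) {
        console.error('********** TRAILING WHITESPACE ON LINE ' + (i + 1) + ' **********');
      }
    });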

Guess what Chris screwed up? I got it on the second try: there was a trailing space on that one answer. *BING*, and all of a sudden my name on the leaderboard (bumped down to around 15th place thanks to all my wrong answers) was the only one with a finish-line flag next to it. I was the first one to answer all 20 questions correctly.

I didn’t have anything else to do at this point, so when I got an anonymous Teams message (I didn’t even know you could do that!) with an offer of someone else’s credentials, I jumped at the chance to run my script again and see how many points I could get without all of the Incorrect penalties: 1900 points, almost a perfect score. (My special-case logic for the Wolfram Alpha math problems didn’t handle the easy ones, and my special-case logic for the Alpha Sort “wrong” answer got all of the other Alpha Sorts wrong.)

So what could I have done better? Well, reading all the way through the instructions would have helped: the information about the point values was at the bottom, and I would have realized that I needed to avoid wrong answers if at all possible. I could have written my script to print answers to the console and not send them until I was happy with them, or even just to ignore question categories I hadn’t implemented yet instead of sending empty incorrect answers. Even once I realized I was being penalized for incorrect answers, for some reason I thought that allowing the 5 seconds to expire without an answer was the same as getting it incorrect, when in fact you could let questions time out indefinitely with no penalty.
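
Even something as simple as a y/n prompt in front of the submit call would have done it. A sketch (the names here are made up; the point is just that nothing goes out until I’ve eyeballed it):

    const readline = require('readline');

    // 'send' is whatever actually submits the answer (e.g. the plaintext PUT
    // from earlier). Letting the prompt sit past the 5-second deadline would
    // have cost nothing, since unanswered questions carried no penalty.
    function confirmAndSend(answer, send) {
      const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
      rl.question('Send "' + answer + '"? (y/n) ', (reply) => {
        rl.close();
        if (reply.trim().toLowerCase() === 'y') {
          send(answer);
        } else {
          console.log('Skipped; letting the timer run out costs nothing.');
        }
      });
    }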

I probably would have still gotten some Incorrects while I was troubleshooting my JSON/plaintext body issues, but I’m pretty sure I would have scored high enough to get first place if I had manually screened the answers before letting them go through. Ah well… I still had fun, which was the only important metric here.

Congrats to the winner Casey Roe, my teammate from the last two in-person Hackathons. And thanks to Chris Tatton and Toine for setting up and running the Hackathon, and Rich for approving the ridiculous 96-core AWS server to run it on.
