Some notes on the OpenAI Sydney Hackathon
Categories: events ai codex
I went to the OpenAI Codex hackathon in Sydney last week. I haven’t been to a hackathon for a long time, so I was a little nervous heading down to the city. The event took place at the UTS Startup hub in Broadway and was attended by 100 or so developers from all kinds of backgrounds - people I spoke to had come from game development, health and education, but there were also marketers, systems infrastructure people, students and all sorts.
The theme of the day wasn’t revealed until the morning and turned out to be quite broad: build something you couldn’t have built before Codex. We were given a month’s subscription to ChatGPT Pro and $100 in API credits for our trouble. It was also supposed to be a greenfield project, or something specifically forked on GitHub, but in hindsight I think a few people bent the rules a bit.
We’ve just had a lot of back burning happening around our place following a pretty severe bushfire at the end of last year. I’d been thinking about how AI could create tools quickly for first responders and civic organisations trying to deal with the aftermath of a disaster. Still pretty broad, but I thought I’d have a go at building a prompt and tooling that would enable Codex to quickly build a communication hub for these kinds of disaster situations, tailored to the event, location and participants at hand. For example, this could be spinning up a site to help with coordination of resources, helping people access food and relief, connecting people separated during a disaster and so on. All things that before Codex were pretty challenging for civic organisations and charities.
I’m really intrigued by the idea of ghost apps and dark factories in software. A ghost app is essentially an app built from a prompt, and a dark factory in software is analogous to dark-factory manufacturing, where factories run lights out, fully automated, with little or no human intervention. Can this be leveraged for deploying software quickly in challenging situations? Maybe.
The hackathon was my first time building with effectively unlimited tokens. I also tried to bring in OpenAI’s new privacy model and the Codex agent server. I thought I could use the privacy model for removing or highlighting any PII data being submitted or accessed, and the agent server for making in-site changes to further optimise the tools for their audience and environment.
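To give a feel for the PII screening idea, here’s a minimal sketch of what redacting submitted text might look like. This is an illustration only: the patterns, placeholder format and function name are my assumptions, not OpenAI’s privacy model, which would be doing something far more capable than a couple of regexes.

```python
import re

# Illustrative patterns only - real PII detection needs much more than this.
# The AU-style phone pattern is an assumption to suit a Sydney audience.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
}


def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a [KIND] placeholder."""
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(f"[{kind.upper()}]", text)
    return text
```

In a disaster-coordination site this kind of pass could sit in front of any public message board, with the agent highlighting rather than stripping matches when a human moderator is available.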
I didn’t create the app prompt in one shot. Instead I worked iteratively with Codex to shape the app. In the background, I had added an instruction to refine prompt.md, incidents.md and learnings.md files on each iteration. The intent was that by the time I had got to a working application those three files would be populated and optimised based on the iterations. I could then provide a new Codex project with those files and it would use them to create the app (including any local context or constraints).
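A standing instruction for this kind of loop might look something like the sketch below. Only the three file names come from what I did on the day; the exact wording is illustrative.

```markdown
After each iteration, before finishing:

1. Update prompt.md so that, run against a clean project, it would
   reproduce the app in its current state, including any local
   context and constraints discovered so far.
2. Append any mistakes, regressions or dead ends from this iteration
   to incidents.md, with a one-line cause where known.
3. Append anything that worked well, or any new constraint, to
   learnings.md so future runs don't have to rediscover it.
```

The point is that the three files accumulate the project’s memory, so a fresh Codex project seeded with them can skip most of the iteration.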
The approach worked pretty well, with a few caveats:

1. Codex, despite what OpenAI says, is pretty ordinary at design. I could probably iterate here to improve the output, but even using the Image Gen 2 model to imagine the UI I was pretty underwhelmed with the look of the final site. A component library, better CSS guidelines and so on would probably work wonders.
2. Codex would often say it had done things when it hadn’t, and was inconsistent between runs. In one run, for example, it would integrate with the Codex app server. In the next run it would say it had integrated, then later mention in passing it was a mocked implementation that hadn’t really been wired into the tool.
3. It’s really hard to be specific when describing what you want. I’ve realised how much subtle back-channel information gets conveyed between people when designing software, and that it’s missing during agentic development. That might be shared context or understanding, social norms, local practices, societal expectations and so on. It’s really hard to get all of this down in text, no matter how much you think about it.
4. English is a terrible programming language. I had to stop halfway through the day and chuckle to myself when I realised that we’ve effectively replaced some pretty good programming languages with an English-language overlay. I’m not sure this is going to continue for long. It feels more than ever like a stopgap.
As an MVP I think I got to a place where I was fairly happy with the results. I need to do the math on the tokens used and so on. I didn’t really get to something production ready, but a few more days of work would have got me there I think. Annoyingly, I don’t think my entry was judged: the event wifi was awful all day and I struggled to get my video demo to upload. I thought I’d managed it as it told me it was processing, but when I checked after the event deadline I saw it had an error.
So no, I didn’t win, but I didn’t expect to. I saw a few of the hacks from the day. They included things like an auto-generated student textbook and a Codex-driven NPC in Pokémon (this was cool and I think won overall).
One big thing that struck me is how hackathons have changed. I used to go to hackathons with my sleeping bag, and a few of us would hack, eat pizza, drink far too much Coke and sleep under desks until we had something we could demo. Now, in the age of content and coding models, the hackathon is live streamed to the web and can go from nothing to something in a few hours. The demos are staged for a live internet audience, and so is the commentary. I’m getting older (I was pretty much the oldest in the room) but I found this a bit uncomfortable. It had a bit of a Running Man feel to it. It’s crazy to think how we’re sleepwalking into that content format.
I’m not sure if I’ll progress my hack further, but as a thought exercise it was really useful, with a lot I can take back to my daily work.
So a big thank you to the OpenAI team for putting on the day and hopefully I’ll get to do something like it again soon.