Reflections on Speaking at the Queensland Agtech Meetup
Last night, I had the pleasure of speaking at the Queensland Agtech Meetup about Large Language Models (LLMs) and their role in agriculture. The event was beautifully organised by the meetup committee and held at the Greenhouse, a great small venue by the Brisbane River.
The morning after a talk is always a funny one—there's often a bit of regret about what was said and what wasn't. Ten minutes is hardly enough to cover a topic as expansive as LLMs, and I always try to pack too much in, so I found myself a bit rushed at the end (though I think I stretched it to around 20 minutes). Despite that, I thoroughly enjoyed the experience. The audience's questions were diverse and thought-provoking, covering everything from bias and data quality to the potential intersections between blockchain and LLMs.
One question that stuck with me was about a good example of LLM use in Australian agriculture. There aren't many yet, with most originating from the US—and I totally forgot to mention that we're working on a few projects with our customers! I also really enjoyed the discussion on blockchain, exploring whether it has a genuine use case in this context, and the thought-provoking questions about bias and hallucinations in AI.
I also diverged from the other panelists over the importance of prompt engineering. Prompt engineering is an important craft, as demonstrated by the excellent documentation provided by Anthropic. Their tutorial shows how crafting detailed prompts massively improves the results you get—from small details like avoiding spelling mistakes (Claude is more likely to make mistakes when you make mistakes) through to avoiding hallucinations by asking Claude to gather evidence first. It's a good read.
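To make the "gather evidence first" idea concrete, here's a minimal sketch of what such a prompt might look like. The function name, wording, and example document are my own illustrations, not taken from Anthropic's documentation—the underlying pattern is simply to ask the model to quote its sources before answering, which makes unsupported claims easier to spot.

```python
def evidence_first_prompt(document: str, question: str) -> str:
    """Build a prompt that asks the model to extract supporting quotes
    from the document before answering, rather than answering directly.
    (Illustrative sketch only; adapt the wording to your own use case.)"""
    return (
        f"Here is a document:\n<document>\n{document}\n</document>\n\n"
        "First, extract the quotes from the document that are most relevant "
        "to the question below, inside <quotes> tags. If no relevant quotes "
        "exist, say so rather than guessing.\n\n"
        "Then answer the question using only those quotes.\n\n"
        f"Question: {question}"
    )

# Hypothetical agricultural example:
prompt = evidence_first_prompt(
    document="Wheat yields in the trial rose 12% under variable-rate irrigation.",
    question="What effect did variable-rate irrigation have on wheat yields?",
)
print(prompt)
```

The point isn't the exact wording—it's that forcing an explicit evidence-gathering step gives you something to check the final answer against.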
As I reflect on the evening, I realise there’s still a long way to go in how we communicate these new technologies. It’s crucial to present compelling examples of their real-world applications in agriculture. Equally important is helping people understand the challenges - like the reality of prompt injection as a security risk, the fact that LLMs reward super users and require time to master, and their inherent biases and limitations. It’s on us to establish the patterns, tools, and checks for responsible AI use and to bring everyone along on this journey.
I’ve shared below a few links to the books and sites I mentioned during the talk. Once the recording is published, I’ll also post it here along with the transcript.