
Intelligent Apiculture

Writer: Dave Black

With ‘Artificial Intelligence’ playing an increasingly prominent role in our lives, we ask our resident science writer Dave Black about some of the key concepts of ‘AI’. What is it? How does it work? And where is it being used?

Q. First of all, is Artificial Intelligence being used at all or is it just science fiction?

Dave Black: Yes, one of the important ways the world is changing is due to our ability to manipulate large volumes of information (so-called ‘big data’) with ‘intelligent’ systems (e.g. computers). In the apicultural field Artificial Intelligence (AI) is being used to process data from all kinds of scientific studies: for example, recognising different honey bee subspecies, analysing honey according to type and origin, pollen analysis, counting pollen baskets to estimate food stores, monitoring entrance activity using body shape and position, and studying social behaviour like trophallaxis, brood disease, sleep, and the demography of a complete colony. Some of these are just ‘proof of concept’ experiments, but it’s not only about analysing data; AI’s ‘neural networks’ are also used to control our microscopes and analyse what we see. AI will affect the future of apicultural science, but its most notable effect will be on the world we live in. AI is a ‘force-multiplier’: trained on data and systems that already exist, it accelerates and amplifies forces, good and bad, that already operate.

Q. Perhaps you’d better start by explaining what people mean by ‘AI’.

While we don’t yet fully understand the intelligent mind, or even know whether intelligence is exclusively a property of the brain, developers use two basic strategies to create so-called ‘intelligent’ systems. Either they copy the physical structure of the brain and its network of neurons and ‘teach’ it, or they use mathematics and logic to build a symbolic analogue of what the mind does. To most of us the name ‘Artificial Intelligence’ is an easy but loose way of referring to all the tools that try to mimic the way we think.

Q. What kind of tools?

Each of these strategies has had its successes and failures, and together they have produced a range of different tools that approximate one or several aspects of ‘intelligence’. These include logical, computational, probability-based tools, search and knowledge-based tools, and brain-like artificial Neural Networks. These different tools are often used together to develop new applications, such as the fancy predictive-text programmes we call ‘Large Language Models’ (OpenAI’s ‘ChatGPT’ is a well-known ‘LLM’).

Q. Have you used any AI?

I have used Microsoft’s latest ‘chatbot’ search, which now incorporates AI, to look at apicultural questions. I don’t have to sign up! It did give me some useful references, and quite convincing answers. However, it included some out-of-date and incorrect information, and I’ve had both bad references and references that don’t exist. One time I told it a reference was wrong and it came back with the correct one. My ‘go-to’ tools are Google Scholar and Research Rabbit, and I’m used to verifying information the old-fashioned way.

Q. How are these tools useful then?

I don’t think all of them are useful. Depending on your point of view, this ‘artificial’ intelligence can be applied to solving technical, engineering problems (like automation) or to building models of intelligent systems as a way of understanding how they work (like science). Building AI has been compared to alchemy, in that we don’t properly understand what ‘makes’ intelligence, so we are just pouring together different substances to see what happens. LLMs themselves have been likened to a psychic’s con, a clairvoyant’s statistical illusion in the mind of the user, and, in a similar vein, to ‘stochastic parrots’ (a bird that uses language but doesn’t actually understand it).

Sometimes the task is the important thing; sometimes we might be trying to understand the data; sometimes it’s the strength of the model that matters most. How we evaluate ‘usefulness’ depends on what the function is. Currently AI is quite task-specific (a computer that can play chess but not draughts or Go), but creating Artificial ‘General Intelligence’ (AGI), in which one type of task ‘knowledge’ can be applied arbitrarily to a completely different topic (something human minds do all the time), is an eventual goal. A few people believe it should be possible to create a self-aware Artificial ‘Super Intelligence’ (ASI) that will exceed the capacity of human thought and overcome its flaws. Many disagree.

It’s also worth remembering Kaplan’s ‘Law of the Instrument’, which says “Give a boy a hammer and everything he meets has to be pounded”. AI tends to be promoted by its fans as the tool that will ‘fix’ everything: shopping, transportation, entertainment, construction, the environment, agriculture, social relations, health, the economy, science and technology, education, war, art. It’s not. It depends what you want to do, but hammers are useful if you know what to hit.

Q. Why are scientists interested in AI?

There are aspects of studying honey bees for which some AI tools are well suited. Neural networks are very effective at analysing digital images, for instance. Observing tens of thousands of individuals over a period of time generates a huge amount of data, so research currently has to be restricted, often to a few marked individuals in an observation hive or nucleus, and that may not generalise to a hive that could house more than 50,000 bees. Artificial neural networks are now being used for a variety of studies and can beat that limitation, offering new insight into foraging and pollination behaviour.

Q. Neural networks you say … what’s that?

In 1943, Warren McCulloch and Walter Pitts produced a mathematics paper titled ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’. It suggested it was possible to think of the neurons in the brain as essentially just ‘logic units’, a mathematical abstraction with inputs (the dendrites) and outputs (the axons). The output value is calculated from a weighted sum of the inputs in such a way that if that sum exceeds a threshold the unit outputs a ‘1’, otherwise a ‘0’. We would now think of these ‘units’ (in electronics terms) as transistors, and that’s the kind of sum computers, which are fundamentally large collections of transistors, are designed to calculate. Connecting the output of each ‘logic unit’ to the inputs of other ‘logic units’ creates an artificial neural ‘network’. The network has one ‘layer’ of input units, one or more ‘hidden’ layers of units that do the arithmetic, and a ‘layer’ of output units, all interconnected.
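To make that concrete, here is a minimal sketch of a McCulloch-Pitts ‘logic unit’ in Python. The weights and threshold are illustrative values of my own choosing (they happen to make the unit behave like an AND gate); they are not figures from the 1943 paper.

# A McCulloch-Pitts 'logic unit': a weighted sum of binary inputs
# compared against a threshold. Weights and threshold are illustrative.

def logic_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With both weights at 1 and a threshold of 2, the unit only fires
# when both inputs are 1 -- that is, it computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', logic_unit((a, b), (1, 1), 2))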

To ‘train’ the network, the output values are compared to the required values, and if they don’t match, the weights between the units are adjusted until they do; it’s trial and error. An important feature of these networks is that there are so many possibilities it’s extremely difficult to work out what process the hidden units undertake to produce the ‘correct’ sum. We compare the ‘in’ with the ‘out’ and do not (cannot) trace the computation. The ‘real-world’ consequence of this is that conducting some kind of audit to work out why you get a particular output value (whether the machine has a fault or a bias) isn’t possible. Neural networks don’t ‘recognise’, ‘know’, or ‘learn’; they follow rules to analyse patterns of probability in binary numbers.
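One classic version of that compare-and-reweight loop is the ‘perceptron’ rule; real networks use more elaborate schemes such as backpropagation, but the principle is the same. The sketch below is illustrative only (the task, learning rate, and starting values are my own): a single unit is taught logical OR purely by trial and error.

# A toy 'trial and error' training loop using the perceptron rule.
# The unit is shown examples, its output is compared with the required
# value, and the weights are nudged whenever the two disagree.

def train(examples, rate=0.1, epochs=20):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias, playing the role of the threshold
    for _ in range(epochs):
        for inputs, target in examples:
            total = sum(i * wi for i, wi in zip(inputs, w)) + b
            output = 1 if total >= 0 else 0
            error = target - output          # compare 'out' with the required value
            # reweight in proportion to the error
            w = [wi + rate * error * i for wi, i in zip(w, inputs)]
            b += rate * error
    return w, b

# Teach the unit logical OR from examples alone.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train(data)
print(weights, bias)

After training, the learned weights reproduce OR, but nothing in them explains ‘why’; we can only compare inputs with outputs, which is exactly the audit problem described above.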

Q. I see. That all sounds a bit technical…

It’s perhaps because of the subject matter, but it’s unfortunate (and ironic) that the field is so full of loaded, weasel words. I’m trying not to be drawn into using words like ‘learning’, ‘understanding’ or ‘reading’ that don’t really resemble the human activities of the same name. I think the anthropomorphism and ambiguity make it very difficult to understand (or explain) what these systems really do. We just make this problem worse by using ‘friendly’, human-like ‘chat’ interfaces; it’s nice, but it hides the true nature of our interaction with the system.

Q. Well, we’ve tackled a fair bit there and you’ve laid out a blueprint for what AI is. How about we reconvene next month to get more specific about AI in beekeeping?

I can’t wait; there’s plenty to lay out there. I might just talk about the cost of AI too – those are some big numbers, in dollars and cents, as well as the cost to the environment…

Dave Black is a commercial-beekeeper-turned-hobbyist, now working in the kiwifruit industry. He is a regular science writer providing commentary on “what the books don't tell you”, via his Substack Beyond Bee Books, to which you can subscribe here.