What Do You See?

When I've spoken about my work with nature and A.I., there's an assumption that I'm using generative models to create the images. This couldn't be further from the truth. At the same time, I've struggled within myself to really understand or know what exactly it is I want to explore, and this is evident when I speak to people about the project, so this is an attempt at thinking through things publicly.

[Image: A yellowing tree stands out amongst the green shrubbery and grass]

As someone who is both a technologist and an artist, I've watched A.I. infiltrate more aspects of my life than I'm comfortable with. While I'm not as A.I.-critical as I once was, I still hold many reservations about the environmental and social impact of these models, so I've come into this project with a few anti-A.I. biases and assumptions. The main one is that

A.I. cannot see

which leads to hallucinations, misunderstandings and/or miscategorisations. The problem with this assumption is that A.I. models are rapidly improving: while still not perfect, they have come a long way even in the four months I've been thinking about and tinkering with this work.

[Image: A very zoomed-in close-up of a branch with moss and leaves covering it]

My project is very simple on the surface: I make some pictures of my natural environment, edit and feed some of these pictures to Apple's image-recognition algorithm through Apple Photos (which is notoriously bad), and see what it spits out. Sometimes it gets the image right, and other times it's really off the mark. Really, though, what I want to explore is what it means to see. Beyond correctly categorising an image, what are the processes a human and a machine each go through to see an image? I want to explore the distinction between how humans and machines use interpretation and inference to read an image. Through writing this, I've come to a new assumption:

all A.I. can do is "see".

Every digital image is made up of pixels, and each pixel is a number. A model, by reading the numbers and the relationships between the numbers, can tell you what an image is and possibly even some contextual information. But it can only do what it is told to do. Can A.I. infer? Can it recall how one image relates to another and recount how it feels? A model's memory is simply all the data it's fed; how does that influence the machine's capability to recall and recount? What will it recall? What will it recount?
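To make that concrete, here's a minimal Swift sketch of what "seeing" amounts to for a machine. It isn't Apple Photos' actual pipeline (that's private); it uses the public Vision framework's VNClassifyImageRequest, which is roughly the same kind of classifier, and assumes macOS 10.15 or later. The filename is just a stand-in.

```swift
import Foundation
import CoreGraphics
import ImageIO
import Vision

// A stand-in path; point this at any photo on disk.
let url = URL(fileURLWithPath: "yellowing-tree.jpg")

// 1. To the machine, the image is only numbers: decode it
//    and look at the raw bytes of the first pixel.
guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
      let image = CGImageSourceCreateImageAtIndex(source, 0, nil),
      let pixels = image.dataProvider?.data as Data? else {
    fatalError("couldn't decode the image")
}
let bytesPerPixel = image.bitsPerPixel / 8
// Prints something like [87, 134, 52, 255] for a greenish pixel.
print("first pixel:", Array(pixels.prefix(bytesPerPixel)))

// 2. "Seeing" is reading the relationships between those numbers
//    and emitting labels with confidence scores.
let request = VNClassifyImageRequest()
do {
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    for observation in (request.results ?? []).prefix(5) {
        print(observation.identifier, observation.confidence)
    }
} catch {
    print("classification failed:", error)
}
```

The output is just a ranked list of label strings and confidence scores; nothing in it carries a memory of the last image the model was handed, which is part of what I'm getting at with recall and recounting.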

[Image: A very zoomed-in close-up of some kind of moss or plant on a branch]

There's still more to think through here, and maybe I'll write some more later. In a nutshell, though: I don't use generative A.I. to create my images.