Working Inside The Apparatus

Three Popes And The End Of Indexicality

SPLICER / DEVELOPMENT, Spring 2021
C-print, 40 x 50 cm

My access to the photographic image economy has been organic: since around 2011 I have worked in and around photography – as a photographer and as an assistant, in commercial and artistic productions as well as in institutional settings. During this time I also studied photography and worked at an art school, supporting teaching, exhibition, and research on the medium. The insights gained through these experiences form the starting point for my applied critique of photography and visual culture.

The initial motivation for working in photography was to better understand my surroundings. However, in a society saturated with images, perception is increasingly shaped through photographic devices rather than through direct sensory experience. These devices and imaging processes – though often unnoticed – significantly impact how I perceive the world. This realization led me to focus my work on the filter of perception: the photographic apparatus.

In 2017, I initiated the project Aporetic Spectacle to investigate the implications of the rise of computational photography, particularly the shift from the traditional "dumb" camera to the smartphone. In Aporetic Spectacle, I exposed the landscape to a smartphone camera and analyzed its visual output to gain insight into the algorithmic mechanisms and programmatic biases introduced into the image-making process.

I began to observe how the photographic process had fundamentally changed. For much of its history, lens-based photography followed a relatively stable structure – from its analogue beginnings through early digital practice:

light  
    → lens  
        → photosensitive surface  
            → latent image  
                → development  
                    → photograph  

This linear chain grounded the photographic image in a direct, optical, and material relationship with the world.

With the rise of computational photography, the photographic process has undergone a profound transformation:

light  
    → lens  
        → photosensitive surface  
            → data set  
                → algorithmic analysis and reconstruction  
                    → photographic image  

This shift redefines the veracity of the material produced by a camera. While computational photography may feel like an upgrade – offering significant improvements in usability, automation, and accessibility (the phone becomes a camera; it detects snow, sunsets, and faces; it is fast, portable, and always connected) – it also functions as a perception filter applied to the physical world. Features such as depth blur, automated lighting effects, and facial filters provide creative convenience, but they simultaneously reshape how we perceive, interpret, and interact with reality on a foundational level.
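The difference between the two chains above can be caricatured in code. The following is a minimal, purely illustrative sketch – none of the functions, numbers, or scene labels are drawn from any real camera pipeline. The point it makes: the optical pipeline maps scene light to a pixel through one fixed response, while the computational pipeline fuses several sensor readings and adjusts the result according to what the software decides the scene "is".

```python
# Toy contrast between the two pipelines described above.
# All functions and values are illustrative assumptions, not a real camera model.

def optical_pipeline(scene_luminance: float) -> float:
    """light -> lens -> photosensitive surface -> photograph:
    one fixed, monotonic mapping from light to pixel value."""
    exposure = min(scene_luminance, 1.0)    # the photosensitive surface saturates
    return round(exposure ** (1 / 2.2), 3)  # a single fixed response curve

def computational_pipeline(frames: list[float], scene_label: str) -> float:
    """light -> sensor data set -> algorithmic analysis -> photographic image:
    many readings fused, then adjusted by what the software thinks it sees."""
    fused = sum(frames) / len(frames)                            # multi-frame fusion
    boost = {"sunset": 1.3, "snow": 1.1}.get(scene_label, 1.0)   # scene-dependent bias
    return round(min(fused * boost, 1.0) ** (1 / 2.2), 3)

# The same light can yield different pictures depending on how the
# algorithm classifies the scene:
same_light = [0.5, 0.5, 0.5]
as_sunset = computational_pipeline(same_light, "sunset")
neutral = computational_pipeline(same_light, "unknown")
```

Under these assumptions, `as_sunset` comes out brighter than `neutral` even though the incoming light is identical – a small model of the "perception filter" described above.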

Before the emergence of AI-generated imagery, the computer-generated image (CGI) was often considered the new photographic image: visually convincing yet entirely disconnected from the physical world, and thus incapable of conveying knowledge about it. Paradoxically, however, CGI is often closer to traditional lens-based photography in its process than computational photography is. In unbiased CGI rendering engines, the physical and optical properties and behaviours of light are simulated in software. Specular and diffuse reflection, transparency, refraction, transmission, subsurface scattering, ambient occlusion, surface roughness, caustics, light absorption, emission, anisotropy, and so on, are all ways in which light interacts with materials and surfaces. These interactions occur in the real world, and their logic is programmed into CGI engines, where the paths of light rays are traced in software to mimic physical light. CGI, in other words, simulates the physical behaviour of light and lenses, while computational photography reconstructs images through algorithmic interpretation rather than through a simulation of optical representation.
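One of the light–material interactions named above, diffuse reflection, can be written down in a few lines. This is a sketch of the physical rule – Lambert's cosine law – that rendering engines encode, not the implementation of any particular engine; the albedo value and the vector representation are illustrative choices.

```python
import math

# Lambert's cosine law: the diffuse light a surface reflects is proportional
# to the cosine of the angle between the surface normal and the direction to
# the light. Vectors here are plain (x, y, z) tuples.

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse_reflection(normal, to_light, albedo=0.8):
    """Reflected intensity under Lambert's law; albedo is the fraction of
    incoming light the material reflects (an illustrative value)."""
    n, l = normalize(normal), normalize(to_light)
    cos_theta = sum(a * b for a, b in zip(n, l))
    return albedo * max(cos_theta, 0.0)  # surfaces facing away reflect nothing

# Light hitting a surface head-on reflects most; at a grazing angle, less;
# from behind, nothing at all.
head_on = diffuse_reflection((0, 0, 1), (0, 0, 1))
grazing = diffuse_reflection((0, 0, 1), (1, 0, 0.2))
behind = diffuse_reflection((0, 0, 1), (0, 0, -1))
```

The same real-world logic, multiplied across billions of simulated light paths, is what gives unbiased renderers their photographic plausibility – a lineage closer to the lens than to the algorithmically reconstructed smartphone image.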

Following the Aporetic Spectacle project, it became clear that a deeper engagement with the act of photographing was necessary. Merely probing the black box and interpreting its outputs was no longer sufficient. I needed to open the photographic apparatus itself – to disassemble its components, examine the technologies underpinning them, and reconfigure them into a new kind of camera. One that could operate simultaneously as a tool and as a subject of inquiry.

Splicer is that camera.