Operational Principles
Audio samplers are widely used in electronic music to record, cut, stretch, loop, and filter sound bites into entirely new compositions; Splicer applies the same logic to visual material. A physical object, or sample, is placed in front of Splicer's camera module, where it becomes the raw input for an image-generation process rooted in multi-dimensional visual sampling.
Unlike conventional cameras, Splicer does not rely on a rectangular area sensor. Instead, it employs a line scan sensor composed of a single row of pixels. This type of sensor is commonly used in industrial applications for quality control of continuous materials such as paper, textiles, or metal on conveyor systems. The same technology is also found in photo-finish cameras at sporting events, where it enables high-precision, time-resolved imaging of competitors crossing the finish line.
The principle of line-based image capture has historical roots in analog photography, most notably in slit-scan techniques used for producing panoramic images. Companies such as the Swiss manufacturer Seitz have a long-standing tradition of developing rotational panoramic cameras based on this method. A more abstract and cinematic application of slit-scan imaging appears in the iconic Stargate sequence of 2001: A Space Odyssey (1968), directed by Stanley Kubrick, where it was used to evoke temporal and spatial distortion.
To intuitively grasp Splicer’s operational logic, one might compare it to placing a hand on a photocopier. As the scanning light bar moves beneath the glass, shifting the hand during the scan produces distorted or warped images. Splicer operates on a similar principle, but with significantly greater complexity, dimensionality, and precision of control.
In Splicer, the scanning line remains fixed. It is the object – rather than the camera – that moves, not merely across a flat plane, but in all three spatial dimensions, with additional rotation around two axes. Furthermore, Splicer incorporates optical perspective controls, including horizontal and vertical shift, as well as focus adjustment mechanisms. These parameters allow for fine-grained manipulation of how the object is spatially translated into visual data. Crucially, both the motion speed and the sensor’s triggering frequency can be dynamically varied during acquisition, enabling time itself to be captured at multiple resolutions within a single image.
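The core mechanism described above – a fixed one-pixel-wide sensor whose successive readings are stacked side by side, with a trigger interval that may vary during acquisition – can be illustrated with a minimal sketch. The code below is a conceptual simulation, not Splicer's actual software; the scene, the drift speed, and the `capture_line` function are all hypothetical stand-ins chosen for demonstration.

```python
import numpy as np

def capture_line(scene, t):
    """Hypothetical stand-in for one exposure of the line sensor.

    Returns the single column of the synthetic scene that sits in
    front of the fixed scan line at time t, assuming the scene
    drifts horizontally past the sensor at a constant 40 px/s
    (wrapping at the edge for the demo)."""
    col = int(t * 40) % scene.shape[1]
    return scene[:, col]

def slit_scan(scene, trigger_times):
    """Assemble an image by stacking one line capture per trigger.

    Non-uniform spacing of trigger_times locally stretches or
    compresses time in the resulting image, mirroring how Splicer
    can vary the triggering frequency during acquisition."""
    columns = [capture_line(scene, t) for t in trigger_times]
    return np.stack(columns, axis=1)

# Synthetic scene: a vertical gradient crossed by a bright bar.
scene = np.tile(np.linspace(0.0, 1.0, 64)[:, None], (1, 256))
scene[:, 100:110] = 1.0

# Dense triggers early (time sampled finely), sparse triggers later.
triggers = np.concatenate([np.linspace(0, 1, 80), np.linspace(1, 4, 40)])
image = slit_scan(scene, triggers)
print(image.shape)  # one output column per trigger: (64, 120)
```

Stretching or bunching the entries of `triggers` is the one-dimensional analogue of the warped hand on the photocopier: the same object region occupies more or fewer columns depending on when the sensor fired.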
Contrary to Vilém Flusser’s account of the photographic gesture – where the photographer moves with the camera around the subject – Splicer reconfigures this relationship entirely: the apparatus remains static, while the subject is set in motion. This reversal echoes the broader condition in which the photographic image becomes reality, and the lived world performs for the camera.
Each image produced by Splicer emerges from this shift. Time, space, and perspective are dynamically inscribed through movement, generating a non-linear visual record. These images function as extended gestures – composed not by framing a moment, but by orchestrating spatial and temporal displacement.