
VIRTUAL BOUNDARIES

Virtual Boundaries surveys our relationship with technology; processed Internet footage recreates the individually tailored bubble that algorithms construct around each of us online. Layered video feedback draws attention to the cyclical patchwork of this virtual space, offering the viewer an opportunity to cognitively map the chaos and, in doing so, unearth insights and questions about how we interact with a vast network of algorithmic labour.

7-SCREEN VIDEO INSTALLATION

MEDIA A | PART OF VIRTUAL BOUNDARIES

MAPS A | PART OF VIRTUAL BOUNDARIES

MEDIA B | PART OF VIRTUAL BOUNDARIES

MAPS B | PART OF VIRTUAL BOUNDARIES

AROUND | PART OF VIRTUAL BOUNDARIES

GLOBAL | PART OF VIRTUAL BOUNDARIES

STREAM | PART OF VIRTUAL BOUNDARIES

Virtual Boundaries

I created a seven-screen video installation, each monitor playing its own three-minute video made with a variety of software and hardware tools and curated by an aesthetic influenced by themes such as social media, algorithms, glitch art and the New Aesthetic. When making the videos I found it most effective to think of them as collages, relying heavily on differently processed layers and montages of content taken from a variety of sources. These sources included, as stated in my proposal, social media, YouTube and Google Maps; I also included content from Google Earth, NASA's Eyes climate software, live streams, video feedback and recordings of myself navigating my local area. Most of this material was captured with QuickTime's screen-recording function, which, given my computer's display, produced a resolution of 1280 x 800. I kept this resolution throughout the videos as I felt it suited the theme of the work. Most of the videos followed a rough formula: a stockpile of screen-captured content or video feedback would be processed through a Max patch I had refined from the different functionalities available in Jitter.


Link to patches and sketches (http://bit.ly/2pPmidv)

[Gallery: screenshots of Max patches and Processing sketches]

I would then screen capture the processed results and add them back into the stockpile, treating the content much like over-processed audio: the image would be adapted and run through the same processes repeatedly to expose artefacts and glitches. I would then select a few of these videos and experiment with using FFmpeg in incorrect ways, usually converting to a raw data format and back into different movie codecs to produce certain glitch effects. Because of the nature of the files this produced, it was necessary to open them in VLC and screen capture the results so the video could be played in other software. (http://bit.ly/2qBhUMj) In addition to this standard formula, I would process certain source videos with a Processing sketch I had found on a forum that pixel-sorted two raw data files together (http://bit.ly/2qxKBvO), and I experimented with using Audacity to process raw data as sound (http://bit.ly/2qWRZ4Y). Taking this stockpile of unprocessed and processed content, I would then create a three-minute base collage in Premiere Pro, place it back into Max, and blend it with processed video feedback; the feedback was processed in a similar way to the source content, producing beautiful patterns and colours, as can be seen below.

[Gallery: stills of processed video feedback]
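
The databending idea described above, treating media data as if it were audio and over-processing it until artefacts appear, can be sketched loosely in a few lines of Python. This is a hypothetical illustration, not the project's actual FFmpeg or Audacity settings; the filenames and the header size left intact are assumptions.

```python
# Minimal databending sketch: treat a media file's bytes as raw
# 8-bit samples, apply a crude audio-style "echo" to them, and
# write the result back out, so the corrupted file shows glitch
# artefacts when reinterpreted as video.
from pathlib import Path

HEADER_BYTES = 1024  # assumed: leave the start of the file intact so players can still open it

def echo_bend(data: bytes, delay: int = 4410, mix: float = 0.5) -> bytes:
    """Blend each byte with the byte `delay` positions earlier, like an echo."""
    out = bytearray(data)
    for i in range(delay, len(out)):
        out[i] = int(out[i] * (1 - mix) + out[i - delay] * mix) & 0xFF
    return bytes(out)

def bend_file(src: str, dst: str) -> None:
    """Copy src to dst with everything after the header 'echoed'."""
    data = Path(src).read_bytes()
    Path(dst).write_bytes(data[:HEADER_BYTES] + echo_bend(data[HEADER_BYTES:]))
```

Running `bend_file` repeatedly on its own output mimics the over-processing loop described above, compounding the artefacts with each pass.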

The blended result was a three-minute screen-captured video that I would refine before deciding whether to reprocess it and place it back with the source content, or import it into Logic Pro X. (http://bit.ly/2pYdkGY) The video would then be synced with MIDI notes triggering a variety of samples, ranging from hardware feedback and computer static noise to more tonal gestures and textures created in a similar fashion to the videos. I would also perform to each video, adding small final touches with improvised pedal feedback, gestural play with a coil microphone and synthesised sounds. Each video was assigned one seventh of a harmonic drone texture that was further processed within its project, again using similar over-processing techniques to add micro-detail to the overall drone, allowing each video to contribute to the collective. This drone was necessary to place the harsh static sounds in context, granting the viewer passage over the virtual boundary.
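
The idea of MIDI notes firing sample cues at fixed points in a video can be sketched with the standard library alone. The script below writes a minimal format-0 Standard MIDI File whose notes trigger at chosen timestamps; the tempo, note numbers and cue times are hypothetical placeholders, not the project's actual Logic session data.

```python
# Minimal Standard MIDI File writer: one track of note events at
# given times in seconds, usable as sample-trigger cues.
import struct

TPQ = 480  # ticks per quarter note
BPM = 120  # assumed tempo: one beat = 0.5 s

def var_len(n: int) -> bytes:
    """Encode n as a MIDI variable-length quantity."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(out))

def seconds_to_ticks(t: float) -> int:
    return round(t * BPM / 60 * TPQ)

def cue_track(cues, note_len: float = 0.1) -> bytes:
    """cues: list of (time_seconds, midi_note). Returns SMF bytes."""
    events = []
    for t, note in cues:
        events.append((seconds_to_ticks(t), 0x90, note, 100))            # note on
        events.append((seconds_to_ticks(t + note_len), 0x80, note, 0))   # note off
    events.sort(key=lambda e: e[0])
    data = b""
    prev = 0
    for tick, status, note, vel in events:  # delta-time encoding
        data += var_len(tick - prev) + bytes([status, note, vel])
        prev = tick
    data += b"\x00\xff\x2f\x00"  # end-of-track meta event
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, TPQ)
    return header + b"MTrk" + struct.pack(">I", len(data)) + data

# e.g. trigger sample cues at 0 s, 1.5 s and 3 s of the video
smf = cue_track([(0.0, 60), (1.5, 62), (3.0, 64)])
```

The resulting bytes, saved as a `.mid` file, could be imported into a DAW so each note launches one sample at its cue point.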

Virtual Boundaries is a video installation that functions as an environment, and is therefore part of a lineage of artists such as Nam June Paik, Pipilotti Rist and Hito Steyerl; the work also shares a bond with that of film director Liam Young. Paik's works such as TV Garden (http://bit.ly/2pOnUo9) and Electronic Superhighway (http://s.si.edu/2pXegey) created environments with the aid of the moving image; he was a pioneer and visionary in his use of new media. By creating a video installation on the topic of the Internet, associations and expectations soak into my work in a kind of homage to his practice. Although not yet properly set in a space and given participants, I believe my installation will function similarly to the work of Pipilotti Rist, consuming the viewer in vibrant video and meditative audio. An exhibition named Pixel Forest (http://bit.ly/2qR6Mht) displays some of her life's work, her most recent pieces admitting the viewer into a world that "fuses the biological with the electronic in the ecstasy of communication." My work relates to hers in that it contributes to a parallel narrative, elaborating on her topic by exposing the inner workings of this ecstasy of communication. Hito Steyerl's recent work 'How Not to Be Seen: A Fucking Didactic Educational .MOV File' (http://bit.ly/2pOi9GG) offers a critique of mass-produced imagery using seemingly lo-fi post-production and digital animation. (http://bit.ly/2qwg0P7) This gives the piece a visual quality I believe our work shares: one that can be read as homage to glitch art and vaporwave aesthetics, which in turn draw on a greater pool of memes, internet browsing and internet culture in general, the very material of Steyerl's critique.
While visually similar, my work differs from Steyerl's in its greater emphasis on the layering and distortion of images, keeping them in their context yet applying a distance that I hope makes the viewer question what they are viewing. Liam Young's film 'Where The City Can't See' is the first film to be shot entirely using laser scanners; its plot follows production-line workers in a near-dystopian future to unearth the sub-cultures that have formed in this autonomous, machine-filled landscape. (http://bit.ly/2pXh6QK) Virtual Boundaries could be said to sit on the cusp of this landscape, peering instead at the early inner workings of managerial algorithms as their work begins to shape physical reality and become an ever more dominant force in human society. My own work has not yet contributed to a scene or movement; it has, however, granted me insight into future potential in this field and allowed me to see where my work can fit in the world. It has opened doors and given me new currency with which to trade in the narrative that is video.

The completed piece has emerged from what felt like a dense tangle of ideas and thoughts surrounding my experience of, and interaction with, the Internet. To me the work personifies a viewpoint into a virtual landscape, a vantage point offering insight into the way we perceive and make sense of virtual space. It exposes the user interface for what it is, the tip of a complex, nonsensical iceberg of algorithmic design, with the user peering into an immense network of mathematical labour. I feel the project fulfilled my aim of creating a cohesive audio-visual piece that could function as an immersive installation. I had the opportunity to set up a trial run of the piece in the TV studios at Newton Park, capturing it functioning as an environment in order to improve my chances of having the work shown at galleries, festivals and creative spaces in the future. In my project proposal I stated that I would create a series of different display setups, such as a media wall and a more interactive version requiring viewers to contribute to the overall piece with their smartphones. These environments and settings did not come to fruition, though they remain exciting possibilities for presenting the work in the future. The creation of the content developed through a number of phases as my knowledge and understanding of the medium improved. The early phases were mostly experimental, allowing me to explore some of Jitter's features in Max; this experimentation exposed the "norms" of the processing techniques I was using, letting me refine and select processes that yielded more interesting results than the initial plays. This refinement continued for a while as I experimented first with mainly screen-captured social media content, then incorporated camera footage and an expanded list of shader scripts that could be applied to the image, stepping closer towards developing my own voice.
After this initial phase I hit a wall where my visual work became static and lifeless, (http://bit.ly/2rjcbNY) spurring new research and development that expanded my palette and deepened my satisfaction with the experiments. I used Facebook communities such as 'Glitch Artists Collective: Tool Time' to ask questions and search for new ways to process images, and allowed myself to incorporate post-production in Premiere Pro, which gave me increased control over what was seen in the videos and, in turn, the power to present a vague narrative in each one. After this phase I believe I truly found an opening into how I could curate, process and develop my visions and ideas about the work into realities; the work became full of life again, letting me unearth new combinations of image that I was happy to finalise, eventually forming the foundations for the seven final videos.
