VIRTUAL BOUNDARIES
Virtual Boundaries surveys our relationship with technology. Processed Internet footage recreates the individually tailored bubble we find ourselves in online due to the nature of algorithms. Layered video feedback draws focus to the cyclical patchwork of this virtual space, presenting an opportunity for the viewer to cognitively map the chaos and, in turn, unearth insights and questions about how we interact with a vast network of algorithmic labour.
SEVEN-SCREEN VIDEO INSTALLATION
MEDIA A | PART OF VIRTUAL BOUNDARIES
MAPS A | PART OF VIRTUAL BOUNDARIES
MEDIA B | PART OF VIRTUAL BOUNDARIES
MAPS B | PART OF VIRTUAL BOUNDARIES
AROUND | PART OF VIRTUAL BOUNDARIES
GLOBAL | PART OF VIRTUAL BOUNDARIES
STREAM | PART OF VIRTUAL BOUNDARIES
Virtual Boundaries
I created a seven-screen video installation, with each monitor playing its own three-minute video made using a variety of software and hardware tools, curated around an aesthetic influenced by themes such as social media, algorithms, glitch art and the New Aesthetic. When making the videos I found it most effective visually to think of them as collages, relying heavily on differently processed layers and montages of content taken from a variety of sources. These sources included (as stated in my proposal) social media, YouTube and Google Maps; I also included content from Google Earth, NASA's Eyes climate-change software, live streams, video feedback and recordings of myself navigating my local area. Most of this material was screen captured using QuickTime's screen-recording function, which, given the nature of my computer screen, gave me a resolution of 1280 x 800. I decided to keep this resolution throughout my videos as I felt it suited the theme of the work. Most of the videos followed a rough formulaic approach, usually consisting of stockpiles of screen-captured content or video feedback that would then be processed using a Max patch that I had refined from the different functionalities available in Jitter.
Link to patches and sketches (http://bit.ly/2pPmidv)
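The patches themselves are graphical Max/Jitter objects rather than text, so they are best understood from the link above. As a loose analogue only (not the patch itself), the Python/OpenCV sketch below illustrates the basic idea: each frame is given a crude channel-shift treatment and then blended with the previous output frame, so the processing feeds back on itself. Filenames and parameters here are placeholders chosen for illustration.

```python
# Rough Python/OpenCV analogue of the Jitter processing-and-feedback chain.
# NOT the actual Max patch; filenames and parameters are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("source_capture.mov")   # hypothetical screen capture
out = cv2.VideoWriter("processed.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"),
                      25, (1280, 800))

prev = np.zeros((800, 1280, 3), dtype=np.uint8)  # feedback buffer

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (1280, 800))        # match the 1280 x 800 captures
    b, g, r = cv2.split(frame)                    # crude channel-shift "glitch"
    shifted = cv2.merge([np.roll(b, 8, axis=1), g, np.roll(r, -8, axis=1)])
    # Blend the processed frame with the previous output so the
    # layered, cyclical feedback patterns accumulate over time.
    prev = cv2.addWeighted(shifted, 0.7, prev, 0.3, 0)
    out.write(prev)

cap.release()
out.release()
```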
[Image gallery: Max patches and Processing sketches]
I would then screen capture the processed results and add them back into the stockpile, treating the content much like over-processed audio: the image would be adapted and run through the same processes repeatedly to expose artefacts and glitches. I would then select a few of these videos and experiment with using FFmpeg in deliberately incorrect ways, usually converting to a raw data format and then back into different movie codecs in order to produce certain glitch effects. Because of the nature of the files this produced, it was necessary to open them in VLC and screen capture the results so the video could be played in other software (http://bit.ly/2qBhUMj). In addition to this standard formula, I would also process certain source videos in a Processing sketch I had found on a forum that pixel-sorted two raw data files together (http://bit.ly/2qxKBvO), and I experimented with using Audacity to process raw data as sound (http://bit.ly/2qWRZ4Y); rough sketches of these processes follow. Taking this stockpile of unprocessed and processed content, I would then create a three-minute base collage in Premiere Pro, place it back into Max, and blend it with processed video feedback. The feedback was processed in a similar way to the source content, producing beautiful patterns and colours, as can be seen in the gallery below.
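As a sketch of the FFmpeg round trip, the commands below use standard rawvideo options; the specific pixel formats, frame rate and codec are illustrative guesses rather than the exact settings I used. Decoding the movie to headerless raw bytes and then reinterpreting those bytes with a mismatched pixel format is what misaligns the data and produces the glitched colours.

```python
# Sketch of an "incorrect" FFmpeg round trip; pixel formats and codec
# here are assumptions for illustration, not my exact settings.
import subprocess

SIZE = "1280x800"  # matching the screen-capture resolution

# 1. Decode the movie to headerless raw RGB frames.
subprocess.run(["ffmpeg", "-y", "-i", "input.mov",
                "-f", "rawvideo", "-pix_fmt", "rgb24", "raw.rgb"],
               check=True)

# 2. Re-read the raw bytes while deliberately lying about the pixel
#    format; the size mismatch between rgb24 and yuv420p frames is
#    what smears the image into glitches.
subprocess.run(["ffmpeg", "-y", "-f", "rawvideo",
                "-pix_fmt", "yuv420p",    # "wrong" on purpose
                "-s", SIZE, "-r", "25",
                "-i", "raw.rgb",
                "-c:v", "libx264", "glitched.mp4"],
               check=True)
```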
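In the same spirit, the Audacity step can be approximated in script form: Audacity's Import Raw Data feature reads the bytes as audio samples, an effect is applied, and the result is exported back out. The sketch below stands in for that process with a simple echo; the effect and its parameters are assumptions for illustration, not the exact treatment I used.

```python
# Scripted analogue of databending raw video bytes as sound:
# treat the bytes as samples, mix in a delayed copy (a simple echo),
# and write the bent bytes back out. Parameters are illustrative.
import numpy as np

data = np.fromfile("raw.rgb", dtype=np.uint8).astype(np.float32)

delay = 4410                            # echo delay, in bytes-as-samples
echoed = data.copy()
echoed[delay:] += 0.5 * data[:-delay]   # mix the delayed copy back in

np.clip(echoed, 0, 255).astype(np.uint8).tofile("raw_echo.rgb")
# Reinterpreting raw_echo.rgb as video again (as in the FFmpeg step
# above) smears the echo across the frames as visual artefacts.
```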
[Image gallery: processed video feedback patterns]