Augmented Reality, Machine Learning
About This Project

An augmented-reflection experience that uses deep-dream technology to transform the participant's reflection psychedelically in real time, exploring self-perception and the symbiosis of human and computer vision.

Deep neural networks for vision are built from many stacked layers, where each layer's input is the previous (lower) layer's output. The lowest layer is the input image; the highest layer has a neuron for each category of object the network was trained to recognize: e.g. there is a "dog" neuron that fires if the input image contains a dog.
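The layer stacking described above can be sketched in a few lines. This is a toy illustration, not the project's actual network: each layer consumes the previous layer's output, and the final vector holds one score per object category.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0.0, x)

def forward(image, weights):
    """Pass a flattened image through stacked layers.

    Each layer's input is the previous (lower) layer's output.
    """
    activation = image.flatten()          # lowest layer: the input image
    for w in weights:
        activation = relu(w @ activation)
    return activation                     # highest layer: one entry per category

rng = np.random.default_rng(0)
image = rng.random((8, 8))                        # toy "input image"
weights = [rng.standard_normal((16, 64)) * 0.1,   # a lower layer
           rng.standard_normal((3, 16)) * 0.1]    # top layer: 3 categories
scores = forward(image, weights)
print(scores.shape)  # → (3,)
```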

The lower layers have neurons that fire in response to simpler, more physical features: e.g. one neuron fires when it sees a horizontal edge, a different one for a vertical edge, others are sensitive to a particular texture, and so on.
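A hedged illustration of what such a low-level "edge neuron" computes: a small kernel that responds strongly to a horizontal or vertical edge. The Sobel-style kernels below are an assumption for illustration, not the network's learned filters.

```python
import numpy as np

# Hand-crafted 3x3 edge kernels (illustrative stand-ins for learned filters)
horizontal_edge = np.array([[-1, -1, -1],
                            [ 0,  0,  0],
                            [ 1,  1,  1]], dtype=float)
vertical_edge = horizontal_edge.T

def neuron_response(patch, kernel):
    """One 'neuron': dot product of a 3x3 image patch with its kernel."""
    return float(np.sum(patch * kernel))

# A patch containing a horizontal edge (dark on top, bright on the bottom)
patch = np.array([[0, 0, 0],
                  [0, 0, 0],
                  [1, 1, 1]], dtype=float)

print(neuron_response(patch, horizontal_edge))  # → 3.0 (fires strongly)
print(neuron_response(patch, vertical_edge))    # → 0.0 (does not respond)
```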

The idea is to choose a target image whose style you wish to copy. We take an input image and modify it using the vision network: we change the image in a way that makes the lower layers' activations as similar as possible to those of the target image, while preserving the higher layers. The result logically contains the same content, but the physical components of the image are rendered in the target style. (Doing this fast enough requires another non-trivial step: training a separate neural network to perform the process quickly.)
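The two objectives above can be sketched as loss functions, in the spirit of Gatys-style transfer. Names and shapes here are assumptions: the content loss compares higher-layer activations directly, while the style loss compares Gram matrices (feature correlations) of lower-layer activations, which capture texture while discarding spatial layout.

```python
import numpy as np

def gram_matrix(features):
    """features: (channels, height*width) activation map of one layer."""
    return features @ features.T / features.shape[1]

def content_loss(generated_high, content_high):
    # Higher layers should be preserved: compare activations directly
    return float(np.mean((generated_high - content_high) ** 2))

def style_loss(generated_low, style_low):
    # Lower layers should match the target's style: compare Gram matrices
    return float(np.mean((gram_matrix(generated_low) - gram_matrix(style_low)) ** 2))

rng = np.random.default_rng(1)
content_high = rng.random((4, 10))     # toy higher-layer activations
style_low = rng.random((8, 10))        # toy lower-layer activations of the target
generated_high = content_high.copy()   # content perfectly preserved
generated_low = rng.random((8, 10))    # style not yet matched

print(content_loss(generated_high, content_high))  # → 0.0
print(style_loss(generated_low, style_low) > 0)    # → True
```

Optimizing the generated image to minimize a weighted sum of these two losses yields the stylized result; the separate feed-forward network mentioned above is then trained to approximate that optimization in a single fast pass.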

Principles of training:
This is not a filter: the application uses state-of-the-art techniques that, until this point, were considered impossible to perform live.
Executing a feed-forward neural network at this rate requires a dedicated computer equipped with a cutting-edge graphics processing unit (GPU), as well as the integration of several libraries and tools to achieve the high throughput needed.
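The real-time constraint can be made concrete with a frame-budget check. This is a hedged sketch: the 30 fps target and the `stylize` stand-in are assumptions for illustration, not the project's model or actual frame rate.

```python
import time
import numpy as np

# Assumed target: the forward pass must fit inside one frame's time budget
FRAME_BUDGET_S = 1.0 / 30.0  # ~33 ms per frame at 30 fps

def stylize(frame):
    # Placeholder for the feed-forward network's stylization pass
    return np.clip(frame * 1.1, 0.0, 1.0)

frame = np.zeros((256, 256, 3))
start = time.perf_counter()
result = stylize(frame)
elapsed = time.perf_counter() - start
print(elapsed < FRAME_BUDGET_S)  # real-time only if the pass fits the budget
```

In practice the placeholder above would be a GPU-resident network forward pass, and the whole camera-to-display pipeline must stay within the budget, not just the network itself.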