Seene Technology

At Seene, we develop computer vision technology that pushes mobile devices to their limits. Our proprietary portfolio includes the following technologies, each optimized for mobile devices:

3D Face Capture

Seene has created a pipeline for high-resolution 3D face scanning on standard mobile devices. All processing is performed on the device without specialized hardware, so anyone can scan their face and see the result in seconds. The raw scan data is then fitted to a canonical base mesh and uploaded to our cloud platform, ready to be applied to virtual experiences and products.
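
Fitting raw scan data to a canonical base mesh generally begins with a rigid or similarity alignment between the scanned point cloud and the template. Purely as an illustration (this is not Seene's actual pipeline, and it assumes point correspondences are already known), here is a minimal Umeyama/Kabsch-style similarity alignment in NumPy:

```python
import numpy as np

def rigid_align(scan_points, base_points):
    """Find scale s, rotation R, and translation t minimizing
    ||s * R @ scan_i + t - base_i||^2 over corresponding points
    (a Procrustes / Kabsch-style similarity alignment)."""
    scan_centroid = scan_points.mean(axis=0)
    base_centroid = base_points.mean(axis=0)
    A = scan_points - scan_centroid
    B = base_points - base_centroid

    # The cross-covariance matrix and its SVD give the optimal rotation.
    H = A.T @ B
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])              # guard against reflections
    R = Vt.T @ D @ U.T

    s = (S * np.diag(D)).sum() / (A ** 2).sum()   # optimal uniform scale
    t = base_centroid - s * R @ scan_centroid
    return s, R, t
```

A production face-fitting stage would additionally deform the template non-rigidly, but the rigid step above is the usual starting point.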

The system is patent-pending and is available within the Seene app on iOS.

3D Scene Reconstruction

Seene’s next generation of technology enables dense 3D scene reconstruction, providing full 3D geometry and texturing comparable to dedicated hardware scanners – all on standard mobile devices, in real time, without the need for processing in the cloud.

2.5D Scene Reconstruction and Computational Photography

The Seene app is powered by our dense 2.5D reconstruction technology, which runs efficiently on mobile devices using both CPU and GPU processing. It processes a continuous video stream to produce fast, high-quality reconstructions, which are available immediately.
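
The details of Seene's reconstruction engine are not public, but at the heart of any dense reconstruction is matching pixels across views and triangulating depth. A deliberately naive sketch, assuming a rectified stereo pair (a real pipeline works on a continuous video stream and adds sub-pixel refinement, smoothness constraints, and GPU parallelism):

```python
import numpy as np

def disparity_sad(left, right, patch=3, max_disp=16):
    """Per-pixel disparity for a rectified image pair by brute-force
    SAD (sum of absolute differences) patch matching along each
    scanline -- the textbook core of dense stereo reconstruction."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(ref - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Triangulate: depth = focal * baseline / disparity."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)
```

The depth map produced this way is "2.5D": one depth value per pixel from the camera's viewpoint, rather than a full volumetric model.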

Seene offers computational photography capabilities, including refocusing photos after they have been taken and adjusting depth. Other real-time effects are currently under development.
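
Post-capture refocusing becomes possible once a per-pixel depth map exists: each pixel is blurred in proportion to its distance from a chosen focal plane. A simplified sketch of this idea (not Seene's implementation; a real renderer would use a proper lens blur rather than a box average):

```python
import numpy as np

def refocus(image, depth, focus_depth, max_radius=4, strength=2.0):
    """Synthetic refocus: blur each pixel with a box kernel whose
    radius grows with the pixel's distance from the focal plane."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    radius = np.minimum(
        (strength * np.abs(depth - focus_depth)).astype(int), max_radius)
    for y in range(h):
        for x in range(w):
            r = radius[y, x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out
```

Changing `focus_depth` after capture moves the plane of sharp focus through the scene, which is exactly what tap-to-refocus exposes to the user.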

SLAM and Object Tracking

Simultaneous Localization and Mapping (SLAM) tracks the camera’s position within previously unknown 3D environments. Our implementation, as used in the Seene app, is robust and extremely fast to initialize.

SLAM also provides the ability to track arbitrary 3D objects in real time.
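
The defining trait of SLAM is that pose and map are estimated jointly. A toy illustration of that structure (unrelated to Seene's implementation, which is visual): a 1D Kalman filter that simultaneously estimates a robot's position and a single landmark's position from odometry and range measurements. A real visual SLAM system tracks a 6-DoF camera pose and thousands of landmarks, but the predict/update loop is the same:

```python
import numpy as np

def kalman_slam_1d(controls, measurements, motion_var=0.01, meas_var=0.04):
    """Minimal 1D SLAM: jointly estimate robot and landmark positions
    from odometry (controls) and ranges z = landmark - robot."""
    x = np.array([0.0, 0.0])            # state: [robot, landmark]
    P = np.diag([1e-4, 1e6])            # robot known, landmark unknown
    F = np.eye(2)
    Q = np.diag([motion_var, 0.0])      # only the robot moves
    H = np.array([[-1.0, 1.0]])         # z = landmark - robot

    for u, z in zip(controls, measurements):
        # Predict: apply odometry to the robot part of the state.
        x = F @ x + np.array([u, 0.0])
        P = F @ P @ F.T + Q
        # Update: fuse the landmark observation.
        S = H @ P @ H.T + meas_var
        K = P @ H.T / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P
```

Because the filter carries the cross-covariance between pose and landmark, observing the landmark simultaneously refines both estimates – the essence of "simultaneous" localization and mapping.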

Augmented Reality

Seene’s augmented reality technology detects the natural features of images and objects within a video feed in real time and is optimized for mobile devices.

Tracking is supported for 3D primitives, such as cylinders, as well as for planar surfaces.
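
For planar surfaces, the standard building block is the homography that maps points on the tracked plane to their positions in the camera image. A minimal sketch of homography estimation via the direct linear transform (an illustration of the general technique, not Seene's tracker):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from 4 or more
    point correspondences using the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # The homography is the null vector of A: the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply a homography to a 2D point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

In a tracker, the homography recovered each frame is decomposed into the camera pose relative to the plane, which is what lets virtual content stay pinned to the surface.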

Google Cardboard SDK for iOS

We’ve ported Google’s Android Cardboard SDK to iOS in collaboration with Peter Tribe. Unlike DiveSDK, it is fully open source and supports lens distortion correction along with a stereo overlay view, which allows any native Cocoa view to be rendered in a full 3D OpenGL environment with distortion correction applied.

Check out the GitHub repo, which includes a demo OpenGL project ready to run in Xcode.
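
The lens distortion correction referred to above pre-warps the rendered image so that it looks undistorted through the viewer's lenses. Cardboard-style viewers are commonly modeled with a two-coefficient radial polynomial; here is a sketch of that model and a fixed-point inversion (the coefficients below are placeholders, not tuned for any particular viewer):

```python
def distort(r, k1, k2):
    """Radial distortion model: r' = r * (1 + k1*r^2 + k2*r^4)."""
    r2 = r * r
    return r * (1.0 + k1 * r2 + k2 * r2 * r2)

def undistort(r_d, k1, k2, iterations=10):
    """Invert the distortion by fixed-point iteration; this kind of
    correction is applied per-vertex (or in a shader) when pre-warping
    the stereo view for the lenses."""
    r = r_d
    for _ in range(iterations):
        r = r_d / (1.0 + k1 * r * r + k2 * r ** 4)
    return r
```

In practice the SDK applies the inverse warp to the rendered frame, so that the lens's own distortion cancels it out and straight lines appear straight to the wearer.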