04 — VR / XR · Accessibility · AI

Enabling VR
Accessibility

AI-driven 3D object descriptions — using Google Cloud Vision API and Shapes XR to make virtual reality environments more inclusive and understandable for all users.

Type
Research & Prototype Project
Role
UX Researcher & XR Designer
Focus
Accessibility · AI Integration · VR Innovation

Making VR More Inclusive

The goal was to enhance accessibility within virtual reality environments by using AI-powered image recognition to generate real-time descriptions of 3D objects — making immersive spaces more inclusive and understandable for all users, including those who rely on audio or assistive descriptions to navigate digital spaces.

Testing Four AI Engines

Before building anything, I tested four AI platforms to evaluate their ability to recognise and describe 3D objects within a VR context — each assessed on recognition accuracy, response clarity, relevance to 3D environments, and API integration capability.

01
Google Cloud Vision
High recognition accuracy, contextually relevant descriptions, strong API compatibility. Selected as the final solution.
02
Clarifai
Good general recognition but less precise for 3D object contexts. Descriptions lacked the specificity needed for immersive environments.
03
Vize.ai
Solid performance for product imagery but limited contextual relevance for virtual scenes and object placement.
04
ChatGPT
Excellent at natural language generation but required more structured input. Used later to format descriptions from Cloud Vision outputs.
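Google Cloud Vision's strong API compatibility came down to a simple request shape. As a minimal sketch, this is how a label-detection request body for the public `images:annotate` REST endpoint could be built from a snapshot of a 3D object; the image bytes and `max_results` value here are illustrative assumptions, not values from the project.

```python
import base64
import json

# Public REST endpoint for the Cloud Vision annotate API.
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_label_request(image_bytes: bytes, max_results: int = 5) -> dict:
    """Build the JSON body for a LABEL_DETECTION request.

    Cloud Vision expects the image content as base64 text and a list of
    feature requests; LABEL_DETECTION returns ranked object labels.
    """
    return {
        "requests": [
            {
                "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
                "features": [
                    {"type": "LABEL_DETECTION", "maxResults": max_results}
                ],
            }
        ]
    }

# Example: a request for a captured snapshot of a virtual object
# (fake bytes stand in for a real render capture).
body = build_label_request(b"\x89PNG...fake-bytes", max_results=3)
print(json.dumps(body)[:60])
```

Sending `body` with an authenticated POST to `VISION_ENDPOINT` returns `labelAnnotations`, the raw material for the object descriptions described below.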

Built in Shapes XR

Developed a working prototype in Shapes XR to simulate AI-assisted object description in a VR environment. The prototype had two core features:

AI Description Icon
Tap any 3D object to receive an instant AI-generated text description — contextual, clear, and delivered without interrupting the immersive experience.
Voice Playback
Listen to the object description via voice — hands-free interaction for improved accessibility. Blends naturally into the VR environment.
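The prototype's exact text-to-speech mechanism isn't specified, so as an assumption, a common way to hand a description to a TTS engine is to wrap it in minimal SSML, which lets the voice pacing be tuned so playback blends into the environment. `<speak>` and `<prosody>` are standard SSML elements; which TTS service would consume them is left open.

```python
from html import escape

def to_ssml(description: str, rate: str = "medium") -> str:
    """Wrap an AI-generated description in minimal SSML for a TTS engine.

    Escaping the text keeps characters like & and < from breaking the
    markup; the rate attribute slows or speeds the voice.
    """
    return f'<speak><prosody rate="{rate}">{escape(description)}</prosody></speak>'

print(to_ssml("A wooden chair with four legs."))
```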

How It All Connected

Used PopAi to generate and format natural-language object descriptions from the Cloud Vision API output. Integrated those descriptions into Shapes XR with voice functionality — ensuring a smooth, non-intrusive experience that blends AI-driven assistance with immersive VR design.
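The step from raw Cloud Vision labels to a natural-sounding sentence can be sketched as a small formatter. The input mimics the `labelAnnotations` list the API returns (each entry carries a `description` and a confidence `score`); the sentence template and confidence floor are illustrative stand-ins for the PopAi formatting step, whose internals aren't public.

```python
def describe_from_labels(labels: list[dict], confidence_floor: float = 0.7) -> str:
    """Turn Vision-style label annotations into one short spoken description.

    Low-confidence labels are dropped so the description stays clear
    rather than exhaustive; the phrasing is a hypothetical template.
    """
    names = [lbl["description"].lower() for lbl in labels
             if lbl["score"] >= confidence_floor]
    if not names:
        return "An object I can't identify yet."
    if len(names) == 1:
        return f"This looks like a {names[0]}."
    return f"This looks like a {names[0]}, with elements of {', '.join(names[1:])}."

labels = [
    {"description": "Chair", "score": 0.96},
    {"description": "Furniture", "score": 0.91},
    {"description": "Wood", "score": 0.62},  # filtered out by the floor
]
print(describe_from_labels(labels))
# → "This looks like a chair, with elements of furniture."
```

Keeping this stage as a pure text transform is one way to keep the AI layer invisible: the VR scene only ever sees a finished sentence, never the recognition machinery behind it.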

The technical challenge was keeping the AI layer invisible to the user — the experience needed to feel like the VR environment was simply explaining itself, not like a separate tool had been bolted on.

Designing for Inclusion in Emerging Tech

This project explored the intersection of AI, XR, and accessibility. Through research, testing, and hands-on prototyping, it demonstrated how smart technology can make virtual environments more inclusive for everyone.

Key takeaway: Designing for accessibility in emerging tech isn't optional — it's essential. With the right tools and intent, we can make immersive spaces more human, one feature at a time.

Shapes XR · Google Cloud Vision API · PopAi · Clarifai · Vize.ai · ChatGPT · Accessibility Design · AI Integration · VR Prototyping