Merntix is an immersive Student Metaverse Room that transforms the way students learn and collaborate. Built with the MERN stack, Three.js, and Claude AI, it offers a 3D virtual space where up to 3 students can learn and interact together.
3D virtual classroom environment
First-person and third-person camera views
YouTube video integration for educational content
Character customization
Day/night cycle
RESTful API for user management and room creation
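As an illustration of the day/night cycle listed above, scene lighting can be driven by a pure function from in-game time to light intensity. The helper below is a hypothetical sketch, not code from the project:

```javascript
// Hypothetical sketch: map an in-game hour (0–24) to an ambient light
// intensity in [0.1, 1.0], brightest at noon and dimmest at midnight.
function dayNightIntensity(hour) {
  const t = (hour % 24) / 24;                           // normalize to [0, 1)
  const daylight = (1 - Math.cos(t * 2 * Math.PI)) / 2; // 0 at midnight, 1 at noon
  return 0.1 + 0.9 * daylight;                          // keep a dim floor at night
}
```

A smooth cosine curve avoids visible jumps when the light value is applied to the scene every frame.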
Technologies Used
Frontend: React, TypeScript, Tailwind CSS, Vite
Backend: Node.js, Express, TypeScript
Metaverse: React, Three.js, React Three Fiber, WebSocket
What Problems Does It Solve?
Immersive Learning Environment: Transforms traditional online education into an engaging 3D experience, increasing student attention and participation
Experiential Education: Allows students to experience the future of education through virtual classroom interactions
Interactive Assessment: Provides a dynamic environment for mock tests and assessments that surpass traditional online quiz formats
Distance Learning Barriers: Breaks down geographical barriers while maintaining a sense of physical presence and community
Student Engagement: Combats online learning fatigue through gamified educational experiences
🚧 Challenges We Ran Into
First Time with Three.js & React Integration
This was our first time exploring Three.js. We started by watching a few YouTube tutorials and then moved on to the official documentation. While we were able to grasp the basic concepts, integrating Three.js into a React environment with @react-three/fiber turned out to be much more complex than expected. Managing components, reactivity, and scene updates required a lot of trial and error.
Finding the Right 3D Character & Classroom Model
Choosing suitable 3D assets was another challenge. We initially used Mixamo for character animations and classroom models, but finding a balance between quality and compatibility took time and effort.
Using Blender for the First Time
Blender was completely new to us. Importing, editing, and exporting models correctly for web use proved to be a steep learning curve. Simple tasks like adjusting poses or fixing animations became time-consuming.
Implementing WebSocket Rooms
While we were already familiar with the basic concepts of WebSockets, using them in combination with a 3D environment was a first. Creating synchronized virtual rooms that multiple users could join and interact in required careful handling of socket events and scene updates.
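The room bookkeeping described above can be sketched independently of any particular socket library. In this minimal sketch, sockets are any objects with a `send()` method, so the same logic works with `ws`, Socket.IO, or another transport; `RoomManager` is an illustrative name, not the project's actual class:

```javascript
// Minimal sketch of room bookkeeping for a shared virtual space.
// Sockets are abstract objects with a send() method.
class RoomManager {
  constructor() {
    this.rooms = new Map(); // roomId -> Set of member sockets
  }

  join(roomId, socket) {
    if (!this.rooms.has(roomId)) this.rooms.set(roomId, new Set());
    this.rooms.get(roomId).add(socket);
  }

  leave(roomId, socket) {
    const members = this.rooms.get(roomId);
    if (!members) return;
    members.delete(socket);
    if (members.size === 0) this.rooms.delete(roomId); // free empty rooms
  }

  // Relay an event (e.g. an avatar position update) to everyone else in the room.
  broadcast(roomId, sender, message) {
    const members = this.rooms.get(roomId) ?? new Set();
    for (const socket of members) {
      if (socket !== sender) socket.send(JSON.stringify(message));
    }
  }
}
```

Keeping this logic separate from the transport makes it straightforward to unit-test with mock sockets before wiring it into the real socket server.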
MCQ Generation using Anthropic AI
In the second phase of the project, we attempted to generate MCQs using Anthropic's Claude API. Integrating the API, formatting the questions correctly, and handling edge cases all proved more difficult than anticipated.
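Much of the edge-case handling mentioned above comes down to validating the model's text output before trusting it. The sketch below assumes the model is asked to return a JSON array of `{ question, options, answer }` objects; that shape and the `parseMcqs` helper are illustrative assumptions, not the project's actual schema:

```javascript
// Hedged sketch: validate MCQs returned by an LLM as raw JSON text.
// The { question, options, answer } shape is an assumed schema for illustration.
function parseMcqs(rawText) {
  let data;
  try {
    data = JSON.parse(rawText);
  } catch {
    throw new Error("Model response was not valid JSON");
  }
  if (!Array.isArray(data)) throw new Error("Expected an array of questions");
  // Keep only well-formed questions: 4 options, answer among them.
  return data.filter(
    (q) =>
      typeof q.question === "string" &&
      Array.isArray(q.options) &&
      q.options.length === 4 &&
      q.options.includes(q.answer)
  );
}
```

Filtering malformed entries (rather than failing the whole batch) keeps the quiz usable even when the model occasionally returns a broken question.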
Exporting Results to PDF
Downloading user results as a PDF seemed like a minor feature at first, but it involved a fair bit of complexity, especially around layout design, formatting, and consistent rendering across browsers.
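The layout side of that complexity is mostly pagination: deciding how many result rows fit on each page before handing the chunks to a PDF library. The helper and its dimensions below are an illustrative sketch, not the project's actual export code:

```javascript
// Sketch of the pagination step for PDF export: split result rows into
// pages so each page fits a fixed vertical budget. Numbers are illustrative
// (roughly an A4 page in millimetres).
function paginate(rows, pageHeight = 280, rowHeight = 10, headerHeight = 30) {
  const rowsPerPage = Math.floor((pageHeight - headerHeight) / rowHeight);
  const pages = [];
  for (let i = 0; i < rows.length; i += rowsPerPage) {
    pages.push(rows.slice(i, i + rowsPerPage));
  }
  return pages;
}
```

Doing this arithmetic up front, instead of drawing until content overflows, keeps page breaks deterministic across browsers.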
There were several moments where we felt stuck and even considered giving up. But after pushing through all the hurdles, we’re proud to present the final product you see today.