Vis in the Wild
By: Ikran Warsame & Fareena Khan
What is the purpose of this visualization?
The visualization is hosted on the Georgia O'Keeffe Museum website. The purpose of this visualization is to showcase all the artwork Georgia O’Keeffe made or contributed to over the course of her lifetime.
What is the data? How was the data captured or collected?
The main visualization is a stacked bar chart. The data consists of O’Keeffe’s artworks and their associated information, such as the year each piece was made and its medium. Since her artwork is the museum’s main theme and collection, the museum is able to provide the data itself.
Overview:
Who are the users that this visualization was made for? Experts, general public, children, patients, mechanics, athletes+trainers...?
The visualization was made for everyone, as the main goal is to inform the public about Georgia O’Keeffe’s life and artwork. Because the website is essentially a subject-matter expert page on the world-renowned artist, it could also be helpful for researchers.
Questions+Insights: What questions can people ask+answer about this data using this visualization? How can they find the answers with this tool? Show some example insights someone can arrive at using this tool
People can ask in which periods or years Georgia O’Keeffe did not produce any artwork, in which years she produced (or did not produce) a certain style of work such as oil painting, and what her earlier years were like. They can find the answers by hovering over the bars to see each artwork’s information in a tooltip, filtering the chart by type of artwork, and filtering by year. For instance, someone who wanted to see what artwork she made at 18 or 19, and how many pieces, could filter for those years; filtering for just drawings would similarly narrow the chart to that medium.
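The filtering interactions described above can be sketched with a toy dataset. This is a minimal sketch, not the museum's actual data or code: the artwork records, field names, and `filter_artworks` helper are all invented for illustration.

```python
from collections import Counter

# Hypothetical artwork records (invented for illustration; the real
# dataset would come from the museum's collection database).
artworks = [
    {"title": "Example Sketch A", "year": 1905, "medium": "drawing"},
    {"title": "Example Painting B", "year": 1908, "medium": "oil painting"},
    {"title": "Example Watercolor C", "year": 1916, "medium": "watercolor"},
    {"title": "Example Watercolor D", "year": 1917, "medium": "watercolor"},
    {"title": "Example Sketch E", "year": 1917, "medium": "drawing"},
]

def filter_artworks(records, years=None, media=None):
    """Mimic the chart's checkbox filters: keep only records matching
    the selected years and/or media (None means 'no filter applied')."""
    return [
        r for r in records
        if (years is None or r["year"] in years)
        and (media is None or r["medium"] in media)
    ]

# "What did she make at 18 or 19?" -> filter to the matching years
# (O'Keeffe was born in 1887, so roughly 1905-1906).
young = filter_artworks(artworks, years={1905, 1906})

# "Show just drawings" -> filter by medium, then count per year;
# each per-year count corresponds to one stacked-bar segment.
drawings = filter_artworks(artworks, media={"drawing"})
per_year = Counter(r["year"] for r in drawings)
print(per_year)
```

Each filter just narrows the record set before the per-year counts are recomputed, which is why combining the year and medium checkboxes immediately redraws the bars.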
Comment on the visual and interaction design choices- are the choices effective? Are there any design choices that are not effective, and how could they be improved?
The visualization is good at showing a large amount of O’Keeffe’s work in one place and helping users explore patterns over time, which is its main purpose. The filters and timeline make it interactive, but the design is not very smooth. The checkbox filters are tedious to use, especially for multiple selections, and the interface feels overwhelming at first with little guidance. Overall, it works for exploration, but could be simpler and more intuitive.
What are the limitations of this design- what can't someone do with this visualization?
A key limitation is that you cannot easily compare things like different time periods or categories side by side, so deeper analysis is harder. It also does not provide explanations or storytelling, so users have to interpret everything themselves. The layout can be confusing and the filters need improvement, which has been noted in usability testing. So it is useful for browsing, but weaker for comparison, learning, and ease of use.
Virtual Augmented & Mixed Reality in the World
By: Elshaddai Melaku & Fareena Khan
Cincinnati Children’s Hospital’s Virtual Reality Surgical Simulation (VR3S)
Image/video credit: Cincinnati Children’s Hospital + Unity Technologies
We chose Cincinnati Children’s Hospital’s Virtual Reality Surgical Simulation (VR3S). It is a real-world use of VR with real impact. Instead of using VR for gaming or entertainment, this system is being used to help surgeons plan surgeries for babies born with congenital heart defects. It combines medicine, 3D modeling, multiplayer VR, and global collaboration into one system. It essentially creates a digital twin of a patient’s heart, and surgeons can step inside it in VR before the real surgery even happens. While surgeons are the main users, it can also help patients, parents, trainees, global medical teams, and medical students.
The project even won Unity’s 2024 Unity for Humanity Grant.
Using VR3S, surgeons can take CT or MRI scans and turn them into a 3D digital twin of the patient’s heart, which they can then explore in virtual reality. Instead of only looking at flat 2D scan slices, they are able to rotate the heart, zoom in on different structures, and even step inside the anatomy in VR to better understand the exact shape of the defect.
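The scan-to-model step can be illustrated at a very high level. Below is a minimal sketch of the first stage only, segmenting "tissue" voxels from a 3D scan volume by an intensity threshold; the volume, threshold value, and helper functions are all invented for illustration, and the real VR3S pipeline is far more sophisticated than this.

```python
# Toy 3D "scan": a 4x4x4 volume of intensity values (invented numbers).
# Real CT/MRI data would be hundreds of high-resolution slices; this
# only illustrates the segmentation idea that precedes mesh generation.
SIZE = 4
volume = [[[(x + y + z) * 10 for z in range(SIZE)]
           for y in range(SIZE)]
          for x in range(SIZE)]

def segment(vol, threshold):
    """Mark voxels at or above the intensity threshold as tissue (1)
    and everything else as background (0)."""
    return [[[1 if v >= threshold else 0 for v in row]
             for row in plane]
            for plane in vol]

def voxel_count(mask):
    """Count how many voxels were classified as tissue."""
    return sum(v for plane in mask for row in plane for v in row)

mask = segment(volume, threshold=50)
print(voxel_count(mask), "voxels segmented out of", SIZE ** 3)
```

A surface-meshing step (for example, a marching-cubes style algorithm) would then convert the binary mask into the 3D geometry that surgeons rotate, zoom, and step inside in VR.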
One of the most advanced features is the ability to place virtual valves, baffles, and other medical devices directly into the heart model so surgeons can test different surgical approaches before the real procedure. The system also supports multiuser collaboration, which means surgeons in different countries can join the same virtual space and work together on a case in real time. On top of that, the platform includes real-time multilingual translation, which makes global collaboration much easier and supports the project’s larger goal of improving heart surgery outcomes worldwide.
The design and implementation choices for this project strongly support its goal of improving surgical planning and collaboration. Using Unity is an effective choice because it supports real-time 3D rendering, multiplayer networking, and immersive interaction, all of which are essential for a VR-based medical application. Important actions such as launching a VR session, sharing a case, editing sessions, and uploading models are clearly labeled and easy to locate. The case creation screen is especially strong from a UX standpoint because it organizes the workflow into clear sections, such as preview, 3D model files, and save actions, making the process intuitive for users navigating complex medical data. These design choices are effective because surgeons need an interface that is fast to learn and easy to use in high-stakes environments. The real-time rendering of patient anatomy, combined with a structured and user-friendly interface, allows medical teams to review cases and plan surgeries efficiently.
Image Credit: https://www.andydilallo.com/vr3s/
The project uses a Unity-based VR environment to create an immersive experience accessed through a VR headset. It leverages cross-sectional imaging by taking data from CT or MRI scans and creating 3D renders from them. This allows surgeons to experience patient anatomy in 3D space, which can improve their understanding of how to move forward more efficiently. In addition, the platform supports multiplayer networking and Azure service integration, which enables real-time collaboration between medical teams across different locations and supports secure cloud-based data management. The main downsides to using technology like this are the cost of VR hardware, installing compatible software, training personnel, and ensuring patient data security. These barriers make the technology less accessible to smaller hospitals and hospitals with fewer resources, which conflicts with the project’s goal of making the system available globally.
Realism is definitely a major goal for a project like this, as it must prioritize medical accuracy and spatial precision. The digital 3D render must reflect the exact patient anatomy, since the goal is to use the render to plan surgical procedures for that patient. One project lead shared that the platform successfully achieved this, and that surgeons are able to explore a patient’s heart from every angle and interact with it in real time.
Users view the model through VR headsets and navigate with controllers, and they can collaborate with others. They can manipulate the 3D heart model by rotating it, moving it around, zooming, viewing internal structures, and placing surgical plans onto the anatomy. These actions are effective in achieving the goal of viewing patient anatomy and planning surgery ahead of time. The collaborative component is also very effective in that it allows experts from different hospitals and locations to work together on the same patient case.
Right now, VR3S is only really useful in hospitals that already have access to high-quality CT or MRI imaging, since the entire experience depends on creating an accurate 3D digital twin of the patient’s heart. It also depends on hospitals having VR headsets, reliable internet for global collaboration, and staff who are trained to use the system effectively. Another limitation is that the platform is still focused mostly on congenital heart surgery, so its use cases are narrower than other medical simulation tools.
We think the next best step for the developers would be to make lower-cost versions so hospitals around the world can access the same technology. It would also be helpful to show and improve real-time blood flow and tissue simulation, since that would make the planning even more realistic. It would also be really useful to expand the system into other types of surgeries, improve haptic feedback, and create easier training modes for medical students and residents.