"Aidentity"
Exhibition at California State University, Fullerton, Marilyn and Cline Duff Gallery (March 2025)

Bridging the AI Perception Gap
Challenge negative perceptions of AI on campus by designing an engaging, playful interaction between students and an AI system.

The Challenge
Negative perceptions of AI among users at California State University, Fullerton created barriers to engagement. Fear, skepticism, and misinformation limited productive exploration of AI's potential, hindering innovation within the academic environment.
My research identified specific concerns, including job displacement fears, ethical questions, and misconceptions about AI capabilities that needed addressing.

My Approach
Developed in Python, Aidentity provides an interactive experience that transforms how campus communities perceive AI technologies. By enabling real-time visual analysis and creative reinterpretation of identities, users directly engage with AI in a positive, personalized context.
This hands-on approach demonstrates AI's creative potential while demystifying the technology through firsthand experience.
IMAGE DEMONSTRATION
IMAGE CAPTURE
IMAGE ANALYSIS AND GENERATION
Design Process
CONTEXT
At CSU Fullerton, many students view AI with fear, skepticism, or indifference despite rapid advancements.

METHOD
A pre-experience survey of 72 students and faculty gathered honest opinions on AI's impact and ethics before the project began; a post-experience survey reached 226 individuals.

KEY FINDINGS
1. AI is commonly associated with job risks and unethical shortcuts.
2. Emotional responses ranged from fear to indifference.                    

INSIGHT
This motivated the creation of a personal, interactive experience to demystify AI by engaging users directly.
INSTALLATION DEMO VIDEO
COMPUTATIONAL DESIGN & USER RESEARCH

Python-Based Development
Custom application integrating multiple AI systems into a seamless, responsive user experience optimized for campus deployment.
AI Image Analysis
Cutting-edge natural language processing delivers nuanced, creative interpretations of user images in real-time.
AI Integration
Advanced image generation creates personalized visual outputs that transform user perceptions through creative AI applications.
Field Research
Comprehensive research methodology captured immediate reactions and emotional responses via interactive installation. 226 individuals participated.
REAL-TIME ENGAGEMENT DATA
IDEATION & PROTOTYPING: VISUAL AI
1. Concept: Users take photos; AI analyzes and generates descriptive text, then creates a new AI-generated image from that text.
2. Goal: Spark curiosity and reduce emotional distance by revealing AI’s visual interpretation process.
3. Prototyping Iterations: 
- Tested multiple AI pipelines (OpenCV, BLIP, ChatGPT, DALL·E versions). 
- Addressed hallucinations, clarity, generation speed, and cost.
4. Focus: Balanced quality, efficiency, and user experience for final system design.
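The concept above can be sketched as a minimal Python pipeline. The AI stages are stubbed with deterministic placeholders here; in the actual prototypes they called captioning models (BLIP/CLIP/ChatGPT) and DALL·E, and the function names and prompt wording are hypothetical, not taken from the exhibited code.

```python
# Sketch of the three-stage concept: capture -> describe -> regenerate.
# The AI stages are placeholders; the real prototypes called BLIP/CLIP/
# ChatGPT for captions and DALL-E for image generation.

def describe_image(photo_bytes):
    # Placeholder for the captioning stage (BLIP/CLIP/ChatGPT in prototypes).
    return "a visitor smiling in front of the installation"

def build_generation_prompt(caption):
    # Hypothetical prompt wording for the image-generation stage.
    return f"A playful artistic reinterpretation of: {caption}"

def run_pipeline(photo_bytes):
    caption = describe_image(photo_bytes)
    prompt = build_generation_prompt(caption)
    return {"caption": caption, "prompt": prompt}

result = run_pipeline(b"\x00fake-jpeg-bytes")
print(result["prompt"])
```

Keeping each stage behind its own function made it easy to swap components between prototype iterations, which is exactly what the five cases below do.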
PROTOTYPING PROCESS
I developed and tested 5 system versions using Python and multiple APIs:

Case #1
OpenCV + BLIP API Prompt → DALL·E 2 API
             Result: Generated captions were unclear, and the final images had little relevance to the user’s actual photo.
             Solution: Added ChatGPT API to refine captions before sending to DALL·E.

Case #2
OpenCV + BLIP + ChatGPT Prompt → DALL·E 2 API
             Result: Captions improved and images were generated, but AI hallucinations occurred frequently, producing inconsistent visuals.
             Solution: Replaced BLIP with CLIP API for better image-text alignment.

Case #3
OpenCV + CLIP + ChatGPT Prompt → DALL·E 2 API
             Result: The system worked, but was unstable. Redundant analysis from CLIP and ChatGPT caused inefficiency and DALL·E input errors.
             Solution: Simplified system by assigning analysis solely to ChatGPT and upgraded to DALL·E 3 for improved image quality.

Case #4
OpenCV + ChatGPT Prompt → DALL·E 3 API
             Result: High-quality and visually impressive output, but generation was slow and expensive (~$1/image).
             Solution: Sought alternatives to reduce cost and enhance responsiveness for interactive use.

Case #5
OpenCV + ChatGPT Prompt → DALL·E 2 API
             Result: Best balance of cost, speed, and output quality. Efficient enough for real-time gallery interaction.
             Solution: Final version deployed in the exhibition for public engagement.
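As a rough sketch, the final Case #5 chain (OpenCV capture → ChatGPT description → DALL·E 2 generation) could be wired up with the `openai` Python client as below. The model names, prompt text, and helper functions are assumptions based on the public API, not the code actually exhibited:

```python
# Sketch of the Case #5 chain using the openai>=1.0 Python client.
# A client instance is passed in; no API calls happen at import time.

import base64

def encode_photo(jpeg_bytes):
    """Vision inputs expect base64-encoded image data."""
    return base64.b64encode(jpeg_bytes).decode("ascii")

def describe_photo(client, photo_b64):
    """Ask a vision-capable ChatGPT model for a creative description."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed; any vision-capable model would work
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this visitor creatively in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{photo_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

def reinterpret(client, description):
    """Generate a new image from the description with DALL-E 2."""
    result = client.images.generate(model="dall-e-2", prompt=description,
                                    size="512x512", n=1)
    return result.data[0].url
```

DALL·E 2's smaller sizes and lower per-image price are what made this chain responsive and affordable enough for continuous gallery use.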
IMPLEMENTATION & TESTING: ENGAGING REAL USERS
Setup: A simple interface with a physical red button initiates photo capture and AI processing.
Flow: Capture image → Generate descriptive text with AI → Create an AI-generated image from that text → Display results for user reflection
Privacy: Photos are not stored externally; they are overwritten during each session to protect user data and encourage participation.
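The overwrite-per-session privacy approach can be sketched in a few lines; the file path and function name are illustrative, not taken from the installation's code:

```python
# Sketch of overwrite-per-session storage: every capture is written to
# the same fixed path, so each new visitor's photo replaces the previous
# one and nothing accumulates on disk.

import tempfile
from pathlib import Path

# Single reused slot (illustrative path, placed in the temp directory here).
SESSION_PHOTO = Path(tempfile.gettempdir()) / "aidentity_session_photo.jpg"

def store_session_photo(jpeg_bytes):
    # write_bytes truncates the file, destroying the previous visitor's photo.
    SESSION_PHOTO.write_bytes(jpeg_bytes)

store_session_photo(b"visitor-1")
store_session_photo(b"visitor-2")  # visitor 1's bytes are now gone
print(SESSION_PHOTO.read_bytes())
```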
Outcomes: 152 participants found the experience engaging, insightful, and less intimidating than expected.
SKILLS DEMONSTRATED
1. Technical implementation and debugging
2. User experience and interaction design
3. Real-time user feedback integration
4. End-to-end solo project management
ON-CAMPUS INSTALLATION

California State University, Fullerton. Marilyn and Cline Duff Gallery (March 2025)

This interactive installation, developed in Python, was exhibited in our university’s art gallery, maximizing exposure and encouraging participation from a diverse audience. As a result, participants could personally engage with AI and exchange thoughts and feedback with other visitors about the technology.
DATA-DRIVEN RESULTS
PRE-EXPERIENCE PERCEPTIONS OF AI
Before participants experienced the installation, I conducted a preliminary survey to understand their perceptions of AI. Among 226 respondents, 152 (67.3%) expressed negative views, 12 (5.3%) were unsure, and only 62 (27.4%) had a positive outlook. These results indicate that many participants approached the installation with skepticism or apprehension toward AI.
POST-EXPERIENCE PERCEPTIONS OF AI
However, after engaging with the installation, a noticeable shift in perception occurred. Of the 223 participants who responded to the post-experience survey, 152 (68.2%) reported a positive impression, 11 (4.9%) remained unsure, and 60 (26.9%) still expressed negative views. When asked to describe the experience, participants used words such as “fun” (138 responses), “scary” (81), “silly” (39), and “surprising” (6).
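The reported percentages can be double-checked with a few lines of Python:

```python
# Quick check of the pre- and post-experience survey percentages.

pre = {"negative": 152, "unsure": 12, "positive": 62}    # n = 226
post = {"positive": 152, "unsure": 11, "negative": 60}   # n = 223

def pct(count, total):
    """Percentage rounded to one decimal place, as reported."""
    return round(100 * count / total, 1)

pre_total = sum(pre.values())
post_total = sum(post.values())
print(pct(pre["negative"], pre_total))    # 67.3
print(pct(post["positive"], post_total))  # 68.2
```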
This data suggests that when AI is presented in a creative and interactive format that directly addresses user concerns, it can significantly improve public perception. Rather than simply conveying information, this installation demonstrated the power of experiential engagement in reshaping attitudes toward technology.
