AI Prototyping — Part 1: Experimental, Illusory and Comparative Methods

Stepping into a future where our daily lives intertwine more deeply with machine intelligence, think about the scenes unfolding around us. Autonomous vehicles weave through the streets, mingling with people on foot and cars from a bygone era. Our homes are alive with IoT devices, quietly working away, sensing and acting without needing a nudge from us. And at work, co-bots, those collaborative robots, are the new colleagues at our side. This blend of human and machine is reshaping not just how we live at home or work but also the fabric of our communities in ways we're just starting to grasp. And still, as we go through this shift, we're finding that our toolkit for understanding and shaping these changes is incomplete. There is a pressing need for innovative and impactful ways to examine the effects of our design decisions in AI and to think through the ecosystems we're creating. This challenge is not just about design – it's about rethinking our future in a way that includes AI.

In this article I will explore specialized prototyping methods designed explicitly for AI-driven design work.

THE CHALLENGE

As designers of user interaction, we are facing novel challenges when dealing with artificial intelligence. Mastering the foundational technologies behind AI is difficult, and figuring out how and when to apply them is even more complex.

AI-based designs require extra consideration, and that translates to the selection of tools, which should strengthen designers' proficiency in using AI as a fundamental resource in their work. It's imperative for designers to cultivate an intuitive grasp of AI by experimenting, investigating, and constructing. Making AI more understandable in this way fuels further efforts, design ideas, and approaches, because designers get hands-on with their tools of the trade. Designers also bring unique perspectives and varied ethical considerations, even though ethics is often assumed to be a single, shared standard.

There's a scarcity of solutions aimed at designing interaction patterns and behaviors for AI. AI-driven interactions are intricate by nature: they emerge from self-sufficient, adaptive systems that demand regular oversight and adjustment from users. The big questions, though, revolve around the functionality of each AI implementation and its implications for humans.

For AI ecosystems to be functionally satisfying, we need to develop interactions that combine design aesthetics (or intelligence aesthetics?) while avoiding overused narratives and storytelling. According to research, there are multiple categories, such as design, narratives, and behavior, that have to be addressed.

FAST EXPERIMENTAL PROTOTYPING OF INTERACTIONS AND AUTONOMOUS BEHAVIORS

This is a methodological approach dedicated to the fast exploration and refinement of interactions and autonomous behaviors across diverse application contexts. It rests on the principle of rapid prototype generation and testing, so that potential deficiencies are identified swiftly and solutions can be innovated. The process follows short cycles of building a rough prototype, testing it, and refining it, as the sketch below illustrates.
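To make the idea concrete, here is a minimal sketch, in Python, of what one such rapid build-test-refine cycle might look like. The behavior variants, the simulated pedestrian scenario, and the scoring rule are hypothetical placeholders for illustration, not part of any prescribed method.

```python
"""Minimal sketch of a rapid prototype-and-test loop for autonomous behaviors.

Everything here (behaviors, scenario, scoring) is a stand-in for whatever
the project is actually prototyping.
"""
import random

# Candidate behavior variants we want to compare quickly.
BEHAVIORS = {
    "cautious": lambda distance: "slow_down" if distance < 5.0 else "continue",
    "assertive": lambda distance: "slow_down" if distance < 2.0 else "continue",
}

def run_trial(behavior, n_steps=50):
    """Run one quick simulated trial and return a crude safety/progress score."""
    score = 0
    for _ in range(n_steps):
        distance = random.uniform(0.0, 10.0)  # distance to a simulated pedestrian
        action = behavior(distance)
        if distance < 1.0 and action != "slow_down":
            score -= 10                       # penalize unsafe behavior heavily
        elif action == "continue":
            score += 1                        # reward forward progress
    return score

if __name__ == "__main__":
    random.seed(42)
    for name, behavior in BEHAVIORS.items():
        scores = [run_trial(behavior) for _ in range(20)]
        print(f"{name:>10}: mean score {sum(scores) / len(scores):.1f}")
```

The point is not the simulation itself but the loop: each variant can be scored in seconds, the weakest ones are discarded, and the next round of prototypes is informed by what the scores reveal.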

WIZARD OF OZ (WOZ) DESIGN EXPERIMENTS AND TESTING WITH PEOPLE

This approach uses a façade of AI (the "wizard") to simulate interactions. This way, designers get to test hypotheses and gather user feedback without the need for fully developed AI systems.

The Wizard of Oz is a highly effective approach for exploring user interactions and experiences when there is no immediate need for a fully developed AI system. In a WOZ design, a human operator (the "wizard") simulates AI behaviors and responses, so hypotheses can be tested efficiently through user feedback. A minimal sketch of such a setup follows.
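Below is a minimal Python sketch of a WOZ chat harness, assuming a console-based study for simplicity. In a real session the wizard would sit at a separate screen; the log file name and record format are assumptions for illustration only.

```python
"""Minimal Wizard of Oz chat harness (a sketch, not a production tool)."""
import json
import time

LOG_FILE = "woz_session.jsonl"  # hypothetical log: one JSON record per turn

def log_turn(role, text):
    """Append one conversational turn with a timestamp to the session log."""
    record = {"time": time.time(), "role": role, "text": text}
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    print("WOZ session started. Type 'quit' to end.")
    while True:
        user_text = input("Participant> ")
        if user_text.strip().lower() == "quit":
            break
        log_turn("participant", user_text)

        # The wizard secretly composes the "AI" reply by hand.
        wizard_text = input("Wizard (hidden)> ")
        log_turn("wizard", wizard_text)

        # The participant only ever sees the system-styled response.
        print(f"AI: {wizard_text}")
```

The logged transcript is what makes the method useful afterwards: it captures which wizard responses worked, where participants got confused, and which behaviors are worth building for real.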

COMPARING DIFFERENT ALGORITHMS, DATASETS AND TRAINING METHODS

This method embodies a quintessential aspect of AI prototyping: an empirical assessment of the relative efficacy of different algorithms, datasets, and training methods.

At its core lies a systematic examination of algorithmic performance, the applicability and integrity of various datasets, and the effectiveness of diverse training methodologies. Comparative scrutiny not only helps engineer more resilient AI systems but also forces the project's objectives to be articulated clearly and precisely. The sketch below shows one way such a comparison can be set up.
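As an illustration, here is a minimal comparative evaluation in Python, assuming scikit-learn is available. The choice of datasets, models, and the accuracy metric are illustrative assumptions; a real project would substitute its own candidates and evaluation criteria.

```python
"""Minimal sketch of comparing algorithms across datasets with cross-validation."""
from sklearn.datasets import load_iris, load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Candidate datasets and algorithms to compare side by side.
DATASETS = {
    "iris": load_iris(return_X_y=True),
    "wine": load_wine(return_X_y=True),
}
MODELS = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

if __name__ == "__main__":
    for data_name, (X, y) in DATASETS.items():
        for model_name, model in MODELS.items():
            # 5-fold cross-validation gives a quick, comparable accuracy estimate.
            scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
            print(f"{data_name:>5} | {model_name:<20} "
                  f"accuracy = {scores.mean():.3f} ± {scores.std():.3f}")
```

The same grid can be extended with different training regimes (for example, alternative hyperparameters or preprocessing steps) so that algorithm, data, and training choices are all evaluated against the same yardstick.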

Conclusion

As machine intelligence becomes an integral part of our daily lives, we need a sophisticated toolkit to deal with it. Through the lenses of experimental design, illusory interfaces, and empirical comparison (only some of the methods, with more to follow in Part 2), we've seen how these approaches can refine the way we design with AI.

The expanse of AI's applications and their potential impact calls for more inquiry. In the second part I will analyze three more prototyping approaches that explore other dimensions of AI design. Stay tuned.

Title credits: Pixabay