Interfaces

Dr Charles Martin

Announcements

  • Assignment 2 template and rubric updated
  • Next two tutorials include assignment support baked into the activities.
  • Still processing a few queries related to Assignment 1; should be finished soon.
  • Final project will be released soon. Likely to involve thinking about alternative interfaces so this is an important lecture!

Who has a question about assignment 2?

Plan for the class

  • Research skills (how to use Google Scholar and cite references)
  • Overview of the diversity of interfaces in use and in research!
  • Outline key design and research considerations
  • Think about natural user interfaces
  • Think about which interface is best for a given application or activity

Research Skills

Finding information when you need it (Source: Charles 2024)

Finding a source

  • Assignments in this class require you to make independent choices
  • Your choices should be backed by a source of knowledge, e.g.,
    • what abilities does a particular animal have?
    • what research plan makes sense for a particular interface?
    • why do human-AI interfaces have usability problems?

In scholarly writing, we need to support every statement we make. Support can be either (1) a citation to a scholarly source or (2) evidence from a study. Where do you find these sources?

Google Scholar

  • search engine for scholarly sources
  • add more search terms to get more specific
  • you can use the “time” selector on the left column to find recent work

Careful:

  • Google Scholar indexes anything that looks like scholarly research (e.g., any PDFs on a website in a conference/journal format). You need to use critical thinking to decide whether sources are good quality or not.
  • Google Scholar can give you a formatted citation, but it may not include all information (e.g., URL, DOI)
Finding sources with Google Scholar

ACM Digital Library

  • ACM Digital Library archives proceedings of ACM conferences and journals.
  • not as good at searching, but will show you only peer-reviewed works

Careful:

  • Works best on-campus or via virtual.anu.edu.au so that you can access all papers.
  • CHI and ACM are centres of HCI research, but there are other non-ACM venues that could be missed.
Finding sources with ACM Digital Library

Citations

In this class, we are going to standardise on numerical citations. When you state something in your text that needs support, you put the citation at the end of the sentence [1].

References

  • [1] Kazuhiro Wada, Masaya Tsunokake, and Shigeki Matsubara. 2025. Citation-Worthy Detection of URL Citations in Scholarly Papers. Proceedings of the 24th ACM/IEEE Joint Conference on Digital Libraries. Association for Computing Machinery, New York, NY, USA, Article 28, 1–5. https://doi.org/10.1145/3677389.3702570

Write the references in Markdown like this:

- [1] Kazuhiro Wada, Masaya Tsunokake, and Shigeki Matsubara. 2025. Citation-Worthy Detection of URL Citations in Scholarly Papers. Proceedings of the 24th ACM/IEEE Joint Conference on Digital Libraries. Association for Computing Machinery, New York, NY, USA, Article 28, 1–5. <https://doi.org/10.1145/3677389.3702570>

Metadata vs citation format

  • metadata for a reference can be used with any citation format
  • academics often use special tools for storing reference metadata
  • computer scientists tend to use BibTeX format
  • BibTeX is part of the venerable (La)TeX ecosystem originally developed by CS luminary Donald Knuth in the late 70s
  • in LaTeX you can select citation format (e.g., ACM, IEEE, Chicago, Harvard, APA)
@inproceedings{adiwangsa-charades:2025,
author = {Adiwangsa, Michelle and Bransky, Karla and Wood, Erika and Sweetser, Penny},
title = {A Game of ChARades: Using Role-Playing and Mimicry with and without Tangible Objects to Ideate Immersive Augmented Reality Experiences},
year = {2025},
isbn = {9798400714863},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3715668.3736382},
abstract = {Using tangible objects for immersive augmented reality (AR) experiences offers various benefits, such as providing a physical means of interacting with virtual objects and enhancing the functionality of everyday objects. However, designing AR experiences with tangible objects presents unique challenges, particularly due to the diverse physical properties that can influence user interactions. In this provocation, we explore effective approaches for ideating such AR experiences, by designing two exergames intended for AR head-mounted displays (HMDs). We found that role-playing and mimicry, both with and without tangible objects, provide valuable benefits in the design of such experiences. Building on this insight, we introduce ChARades, an iterative and playful gamestorming technique that incorporates role-playing and mimicry in both forms, to ideate immersive AR experiences involving tangible objects.},
booktitle = {Companion Publication of the 2025 ACM Designing Interactive Systems Conference},
pages = {440–445},
numpages = {6}
}

The metadata entry for Adiwangsa et al. (2025), which is cited using [@adiwangsa-charades:2025] in this Markdown document. In LaTeX you would cite it like this: \cite{adiwangsa-charades:2025}.
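Putting the pieces together, here is a minimal LaTeX sketch. It assumes the entry above is saved in a file called `references.bib` next to the document, and that the `ACM-Reference-Format` bibliography style from the ACM article template is available (any installed style, e.g., `plain`, would also work for a quick test):

```latex
\documentclass{article}

\begin{document}
% A numerical citation goes at the end of the sentence that needs support.
Role-playing and mimicry can help ideate tangible AR
experiences~\cite{adiwangsa-charades:2025}.

% ACM-Reference-Format.bst ships with the ACM (acmart) template.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references} % reads references.bib
\end{document}
```

Running `pdflatex`, then `bibtex`, then `pdflatex` twice resolves the citation number and prints the formatted reference list.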

Big BibTeX files

Big reference libraries are part of academic work.

  • Charles’ “main” .bib file has ~1300 entries (started in 2013, first year of Charles’ PhD!)
  • references.bib for this course has about 100 entries.
  • Charles uses BibDesk to manage the big file, but usually just VSCode for smaller BibTeX libraries.
  • Other popular reference managers are Zotero, Mendeley and Endnote.
  • Opinion time: BibDesk is nice because the database is just a text file.
References listed in BibDesk

In this class…

  • Not expecting you to use BibTeX (yet)
  • Expecting you to list references in “ACM format”.
  • ACM format is inspired by APA referencing
    • supports both numerical [1, 2] and author-date (Martin, 2020) referencing.
    • includes full names of authors and publication details for clarity
    • includes DOI as a full URI
  • The ACM site provides plain-text examples of references to different types of source
  • CSL version (Citation Style Language)
  • BST version (BibTeX Style)

Rules for this course:

  1. all references must exist! (!!!)
  2. use ACM format
  3. use numerical citation (not author date) – saves words in your word count
  4. all references in your list should be cited in text
  5. you must have actually read your references, and they must be relevant to your work
  6. at least two references should be scholarly (so not a medium article) and external (so not course lecture notes or textbook)
  7. expectation: ✨✨perfection✨✨

Who has a question about referencing and finding sources?

🙋🏽‍♀️🤷💁🏻🧠🗣️

Mucking up citations risks losing marks unnecessarily.

Easy to do, but looks really bad to markers.

Let’s clear up some questions now if you have them!

I commit to finding ways to help you do this better!

Interface Types

How would you describe a computer interface?

graphical, command, speech, ambient, intelligent, tangible, touch free, natural, etc.

Focus of interface can change:

  • function e.g., smartphones
  • interaction style used e.g., command, graphical or multimedia
  • input/output device e.g., pen-based, speech-based, or gesture-based
  • platform e.g., tablet, mobile, PC, or wearable
Yichen Wang using a touchless, natural, AR interface, 2021.

45 years of interface types!

  • Command Line
  • Graphical
  • Multimedia
  • Virtual reality
  • Web
  • Mobile
  • Appliance
  • Voice
  • Pen
  • Touch
  • Touchless
  • Haptic
  • Multimodal
  • Shareable
  • Tangible
  • Augmented reality
  • Wearables
  • Robots and drones
  • Brain-computer
  • Smart
  • Shape-changing
  • Holographic

Command Line Interfaces

  • Type in commands (e.g., ls)
  • pressing certain combinations of keys (e.g., Ctrl + V)
  • fixed from the keyboard (e.g., delete, enter, esc) or user-defined
  • largely superseded by graphical interfaces with menus, icons, and predictable text commands
  • still useful for complex software (e.g., CAD), scripting batch operations, coding
  • is ChatGPT a CLI?
Charles editing a lecture in neovim (:w)

Research and Design for CLIs

Back in the 1980s, much research investigated how to optimise command interfaces:

  • the form of commands, such as the use of abbreviations, full names, and familiar names;
  • syntax (e.g., how best to combine different commands) and organisation (e.g., how to structure options) (Shneiderman, 1992).
  • Findings showed that there is no universally optimal method for command naming!

Design principle: command names should be chosen to be as consistent as possible!

Graphical User Interfaces

  • Information represented within a graphical interface
  • use of color, typography, and imagery (Mullet & Sano, 1996)
  • interface features abbreviated as WIMP
    • Windows
    • Icons
    • Menus
    • Pointer
Apple’s first GUI: Lisa. Source: The Lisa: Apple’s Most Influential Failure, Computer History Museum.

Research and Design for GUIs

  • window management
    • enabling fluid movement and rapid attention shifts between windows and displays without distraction
    • e.g., keyboard shortcuts and task bars design; auto-fill in online forms
  • menu design consideration: decide which terms to use for menu options
  • consistent icon libraries for developer: e.g., fontawesome.com or thenounproject.com

Multimedia

A single interface combines different media such as graphics, text, video, sound, and links them together with various forms of interactivity. E.g., Wikipedia.

  • better information presentation.
  • facilitate rapid access to multiple representations of information.
multimedia learning app for tablets. Source: KidsDiscover “Roman Empire for iPad”

Multimedia

  • developed for training, educational, and entertainment purposes.
  • To what extent do multimedia interfaces improve learning and play?
  • What happens when users have unlimited access to diverse media and simulations?

Research and Design Considerations

  • How to encourage interaction with all aspects?
  • provide a diversity of hands-on interactivities and simulations
  • employ dynalinking, where changes in one window directly update another (Rogers & Aldrich, 1996).
  • how to best combine multiple media to support different kinds of tasks?

Augmented and Virtual Reality

Interfaces can sit on a spectrum between fully virtual and fully real interaction (Milgram & Kishino, 1994).

  • The big middle area includes “mixed reality” (MR) interfaces
  • Augmented reality usually closer to “real” reality.
  • eXtended reality (XR) is a more recent term.

Has often seemed like a great idea, but hasn’t cracked the mainstream yet (or has it??)

The reality-virtuality continuum

Augmented Reality

  • blending of digital content with the physical world to create an enhanced real-world experience
  • 1960s: Ivan Sutherland’s development of the first head-mounted three-dimensional display
First augmented reality head-mounted display system. Source: (Sutherland, 1968).

Augmented Reality

  • AR systems have evolved significantly, particularly displays and interaction models (Billinghurst et al., 2015; Speicher et al., 2019)
  • displays: see-through, screen-based, projection-based.
  • “spatial computing” is another way of thinking about it, defined by Greenwold (2003):
    • “human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces.”
  • emphasising not only the augmentation of reality but also the meaningful interaction between digital and physical elements.
AR smartphone game Pokémon Go.
AR musical instrument (Yichen Wang, 2022).

Virtual Reality

Emerged in 1970s with computer-generated graphical simulations.

Goal: to create user experiences that feel real when interacting with an artificial environment.

  • stereoscopically displayed image.
  • interact with objects through input devices such as a joystick within the field of vision.
  • higher level of fidelity compared to other graphical interfaces, provides immersion.
  • different viewpoints: first-person perspective, third-person perspective, etc.
Avatars for the “We Wait” VR experience (Steed et al., 2018) (CC-BY 4.0)

Research and Design for Virtual Reality

  • learning and training for specific skills.
    • driving, pilot training, surgery, medicine, complex/confronting environments.
    • build up skills with lower costs and low risk.
  • navigation for accessibility, treatment (e.g., mental health)
  • entertainment.
  • virtual body to enhance the feeling of presence; reduce cybersickness; support natural user experience; the level of realism to target, etc.
iFire Networked Visualisation System. UNSW Center for Interactive Cinema Research, 2025

Activity: Thinking about AR and VR

🙋🏽‍♀️🤷💁🏻🧠🗣️

Find someone near you, and discuss these questions:

  1. Have you ever tried an AR or VR system? (phone, headset, audio-only, whatever)
  2. What was the experience like? Do you see it as a useful technology?

Chat for 3 minutes and we’ll hear a few responses.

Website Design

  • early websites were largely text-based, with hyperlinks to different places or pages of text.
    • “how best to structure information at the interface level to enable users to navigate and access it easily and quickly?”
  • shift from information usability to aesthetics and visual design.

Much of the content on a web page is not read. Web designers are “thinking great literature” (or at least “product brochure”), while the viewer’s reality is much closer to a “billboard going by at 60 miles an hour” (Krug et al., 2014).

Website Design

  • web development involves multiple technologies
  • CSS, HTML, JavaScript, node.js, Python, etc.
  • breadcrumb navigation (key interface element): “way finding” to navigate without losing track
  • design for smartphone or tablet interaction modality, for smaller-sized displays and for infinite scrolling.
  • three core questions proposed by Keith Instone: “Where am I? What’s here? Where can I go?” (Veen, 2000)
Web Development tools
A breadcrumb trail on the Best Buy website showing three choices made by the user to get to Smart Lights (source)

Mobile Device

Smartphones, fitness trackers, smartwatches, large-sized tablets on the flight, educational tablets, etc.

  • embedded sensors, such as accelerometer for movement detection, thermometer for temperature measurement, bio/fitness sensors.
  • new affordances led to novel and creative apps, e.g., the Ocarina iPhone instrument (Wang, 2014)
Mobile Devices. Source: StockCake.
Ocarina, Ge Wang. 2014.

Research and Design: Mobile Interfaces

  • careful design of limited screen and control space, including the selection, placement, and software integration of hardware controls.
  • ensuring touch targets like buttons and icons are large enough for accurate use by all finger sizes.
  • other guidelines exist providing advice on how to design interfaces for mobile devices, e.g., (Babich, 2018).

Appliances

  • machines for everyday use in the home
  • washing machines, microwaves, refrigerators, toasters, bread makers, blenders
  • connections to the internet, control by remote apps
  • do such designs help?
  • Cooper et al. (2014) suggest that appliance interfaces require the designer to view them as transient interfaces, where the interaction is short.
  • Two fundamental design principles: simplicity and visibility.
The Design of Everyday Things. Norman (2013)

Voice Interface

  • lets users interact with apps (e.g., search engines, chatbots, or travel planners) through spoken language
  • commonly used to request information (like flight times or weather) or issue commands (e.g., playing music or selecting a movie)
  • Voice interfaces rely on command- or conversation-based interaction.
  • early speech systems earned a reputation for mishearing all too often what a person said (still true?)
Apple Siri. Source: CHEEZburger Memebase.

Applications for Voice Interfaces

  • Dictation (e.g., Otter.ai, Dragon): faster than typing; supports accessibility.
  • Call routing: Automates customer service; saves costs; needs a human fallback.
  • Barge-in feature: Lets users interrupt system prompts to speed up interactions.
  • Directed dialogue: System asks specific questions; user gives limited responses.
  • Flexible input risks: Users may give too much info; guided prompts help keep input manageable.
  • Mobile speech apps: Used for voice search, translation (e.g., Siri, Google Translate); enables real-time multilingual conversations.
  • Voice assistants: Amazon’s Alexa and Google Home offer interactive skills; promote shared family use and entertainment.
  • Current limits: Struggles with kids’ speech and group speaker recognition, and requires activation by name.

Research and Design for Voice Interaction

What conversational mechanisms should be used to structure the voice interface, and how human-like should they be?

  • natural conversation
  • system navigation efficiency
  • synthesised voice or voice actor: male, female, neutral, dialect, and pronunciation

Pros and cons of dialogue structures, error handling, and etiquette remain key for modern voice interfaces (Cohen et al., 2004).

Tea. Earl grey. Hot. –Captain Picard

Pen-Based

  • light pens, styluses, or scanners for drawing and writing
  • write, draw, select, and move objects on a page or tablet.
  • very early use in CRT displays with lightpen!
  • touchscreen versions most common now
  • another option: digital paper that scans what you write (IR sensor, special dot pattern on the paper)

Pens are great! We should do more with them.

Livescribe Echo 2 Smartpen (source).

Touchscreens

  • common in kiosks, ATMs, checkouts, phones, tablets, computers.
  • Now weird to find a non-touchscreen.
  • Single-touch screens respond to single taps.
  • Multitouch supports multiple simultaneous touches and gestures like pinching, rotating
  • Multitouch devices (e.g., smartphones, tablets, tabletops) enable intuitive interactions using one or both hands.
  • Finger gestures enhance how users interact
  • but there are limitations. How do you “undo” on an iPhone?
A schematic of a multitouch interface. Source: Willtron / CC BY 1.0

Touchless

  • gestural interaction: moving arms, hands, position to communicate
  • track and understand gestures using cameras, sensors and machine learning
  • Watch David Rose’s inspirations for gesture at vimeo.com/224522900

Research and Design for Touchless Interfaces

  • Gesture recognition challenge: Systems must detect when a gesture starts and ends, and distinguish intentional gestures (e.g., pointing) from unconscious movements (e.g., hand waving).
  • Gestures as output: Gestures can also be visualised, such as through avatars mirroring user movements in real time.
  • 3D sensing: Devices with depth cameras (e.g., smartphones, laptops, smart speakers) can capture and respond to gestures in 3D space.
  • Design consideration: How realistic must the avatar or mirrored representation be for users to feel it is believable and connected to their own gestures?

Haptic Interfaces

  • vibration and forces (via actuators) to provide tactile feedback
  • can be embedded in clothing or mobile devices such as smartphones and watches.
  • gaming consoles and driving simulators use haptics to enhance realism.
  • vibrotactile feedback can simulate remote physical communication, like hugs or squeezes, using actuators in clothing.
  • Haptics can also be used for skill training, such as learning musical instruments.
    • “novice players responded well to vibrotactile cues, adjusting their actions accordingly.”
The MusicJacket with embedded actuators that nudge the player to move their arm up to be in the correct position. Source: Yvonne Rogers

Haptic Interfaces

  • Ultrahaptics uses ultrasound to create 3D shapes and textures in midair that can be felt but not seen.
  • It simulates touch-based interfaces (e.g., buttons, sliders) that appear in midair.
  • In automotive interfaces to replace physical controls with invisible, tactile controls: adjust volume or change radio stations.
  • Haptic exoskeletons: embedded into wearable exoskeletons, inspired by “Techno Trousers” from Wallace and Gromit (Rossiter, 2018).
    • Graphene parts are used to stiffen or relax the trousers to assist movement.
    • Application in mobility assistance, fitness.
Trousers with artificial muscles that use a new kind of bubble haptic feedback. Source: The Right Trousers Project

Multimodal

  • multiple input/output modalities (e.g., touch, sight, sound, speech) to enhance user interaction (Bouchet & Nigay, 2004);
  • natural, flexible, and expressive interactions, similar to real-world interaction experiences (Oviatt et al., 2017).
  • Common combinations: speech and gesture; eye-gaze and gesture; haptic and audio, pen input and speech (Dumas et al., 2009).
  • Speech + vision processing is the most common (Deng & Huang, 2004)!

E.g., Kinect for Xbox — combined RGB camera, depth sensor, and microphones for real-time gesture and voice recognition.

Microsoft’s Xbox Kinect. Source: Stephen Brashear / Invision for Microsoft / AP Images

Research and Design for Multimodal Interfaces

Multimodal systems must recognise multiple aspects of user behaviour: handwriting, speech, gestures, eye movements, and body movements.

  • more complex to design than single modality systems.
  • most researched interaction modes are: speech, gesture, eye-gaze tracking.
  • What are the actual benefits of combining multiple input/output modalities?
  • Is natural human-like interaction (e.g., talk + gesture) effective when applied to computer interaction?

Design guidelines, see (Oviatt et al., 2017).

Multimodal Mixed Reality Experience. Microsoft HoloLens 2. Source: Charles Martin. 2023.

Shareable

Designed for multi-user interaction, unlike typical single-user devices (PCs, laptops, phones).

Support multiple simultaneous inputs by co-located groups, e.g., large wall displays (e.g., SmartBoards) and interactive tabletops where users interact using fingertips on a shared surface.

  • A large shared space for group collaboration.
  • Allow simultaneous interaction, unlike working on a single PC.
  • Users can point, touch, and see the same content.
  • This creates a shared point of reference, enhancing group coordination and participation (Rogers et al., 2009).
Collaborative Musical Instrument Reactable (Jordà, 2010). [source](https://www.ycam.jp/en/archive/works/reactable/).

Shareable Research Considerations

  • tabletop systems used in museums and galleries.
  • support interactive, group-based learning (Clegg et al., 2020).
  • some shareable interfaces are software platforms for remote collaboration.
  • Early example: ShRedit (1990s) – supported shared document editing.
  • Google Docs, Microsoft Excel, Miro, Canva, etc.

Research and Design Consideration

From single-device interactions (e.g., handwriting) to cross-device collaboration; key design issues include display layout, participation, and balancing personal and shared spaces.

Tangible

Link physical objects (bricks, cubes, clay) with digital representations through embedded sensors (Fishkin, 2004; Ishii & Ullmer, 1997)

Manipulating objects triggers digital effects like sounds, animations, or lights, either on the object or in surrounding media (e.g., Tangible Bits (Ishii & Ullmer, 1997)).

Some use physical models on digital surfaces (tabletops), where moving objects influences digital events (e.g., Urp for urban planning).

Physical artifacts can be lifted, rearranged, and manipulated directly, distinguishing from purely screen-based or mobile interfaces.

Tangible Bits. (Ishii & Ullmer, 1997)

Research and Design for Tangible Interfaces

  • Conceptual frameworks e.g., (Fishkin, 2004; Shaer et al., 2010; Ullmer et al., 2005) identify what makes tangible interfaces unique.
  • A main design challenge is deciding how to link physical actions to digital effects: the notion of coupling.
  • Designers must choose the type of digital feedback and where it appears (e.g., on the object, beside it, or elsewhere), based on the interface’s purpose, e.g., learning or entertainment.
  • Choosing the right physical artifacts (e.g., cubes, tokens) to support natural and hands-on interaction.
  • Simple materials like sticky notes or cardboard tokens can be used to link physical actions to digital responses.
  • history and goals of tangible interfaces, see: Ullmer et al. (2022)

Wearables

  • Digital devices worn on the body – e.g., smartwatches, fitness trackers, smart glasses, and fashion tech.
  • Early wearable computing enabled mobile recording and access to digital info.
  • Advances in flexible displays, e-textiles, and physical computing (e.g., Arduino) have made wearables more practical and appealing.
  • Items like jackets, jewelry, hats, and shoes have been designed to interact with digital info on the go.
  • From convenience design focus to expressive and communicative functions.
LivingLoom (Zhu et al., 2025).

Robots and Drones

Robots assist in manufacturing, hazardous environments, rescue, and remote exploration with remote controls using cameras and sensors.

Domestic robots help with chores and support elderly or disabled people.

Pet robots like Paro provide companionship and reduce loneliness, especially for dementia patients.

Drones, once military and hobbyist tools, now serve in delivery, entertainment, agriculture, construction, and wildlife monitoring, offering real-time data and automated operations.

(a) Mel, the penguin robot, designed to host activities; (b) Japan’s Paro, an interactive seal, designed as a companion, primarily for the elderly and sick children. Source: (a) Mitsubishi Electric Research Labs (b) Parorobots.com.

Brain-Computer Interfaces

Create a link between brain activity and external devices (e.g., cursor, robot, game controller).

Detect neural signals via electrodes in headsets placed on the scalp.

Assist or augment cognitive and motor functions—especially for people with disabilities (e.g., BrainGate lets paralyzed users type via thought).

…and entertainment – e.g., Brainball, where relaxation controls a ball’s movement.

Aim to transfer mental states (e.g., “focused,” “relaxed”) between people via stimulation.

Ethical concerns arise around mind reading and mental state manipulation, especially with companies like Neuralink pursuing direct brain implants.

Source: Tim Cordell / Cartoon Stock

Smart Interfaces

  • Smart devices (phones, homes, appliances) are context-aware, often AI-powered, and network-connected, learning from user behaviour (e.g., Nest thermostat).
  • Aim is to automate tasks, improve efficiency, and reduce human error—often by removing humans from the loop (e.g., smart buildings managing lighting/heating).
  • over-automation can frustrate users, limiting control (e.g., sealed windows, restricted manual overrides).

Shape Changing Interfaces

  • Use physical form changes as input/output (e.g., 3D bar charts moving rods to show data) (Alexander et al., 2018).
  • Enable tactile interaction beyond screens
  • Make data more relatable by embedding it in everyday contexts

Research and Design Consideration

  • Do they improve data understanding and engagement?
  • Are they better than 2D/3D digital charts for health monitoring?
  • Size of rod grids; number of cubes for easy learning and recognition.
inFORM: A shape-changing interface that uses a series of motor-controlled pins to render digital content in the form of 3D rods; developed by MIT Media Group. Source: http://trackr-media.tangiblemedia.org/publishedmedia/Projects/2013-inFORM/inFORM%20Collection/4676

Holographic Interfaces

  • Create the illusion of a 3D person being present by taking advantage of the human perceptual system.
  • Projection and display technology has achieved some convincing results (e.g., ABBA Voyage in London).

Research and Design Consideration

  • A lot of research has been conducted in the tech industry exploring how to represent people in virtual spaces in ways that feel natural, comfortable, engaging, and not creepy.
  • Design considerations include hologram size and how users can interact and communicate with projections in their space.
ABBA Voyage holographic show. 2022. Source: The Guardian.

Activity: Choosing an Interface

🙋🏽‍♀️🤷💁🏻🧠🗣️

Imagine you are developing a new interface for the classic game “Connect Four”

  1. what interface would you choose?
  2. how do you think it would evaluate in terms of usability and user experience?

Chat for 3 minutes with someone next to you then let’s hear some ideas.

The classic game, Connect Four. By Jonathan Kellenberg, Flickr, CC BY 2.0

Coda

What makes these interfaces good or bad?

How would you choose what interface to use?

Natural User Interfaces and Beyond

A natural user interface (NUI) is designed to allow people to interact with a computer in the same way that they interact with the physical world—using their voice, hands, and bodies.

  • but how natural are NUIs?

Don Norman (Norman, 2013) argues “natural” depends on a number of factors:

  • how much learning is required,
  • the complexity of the app or device’s interface,
  • and whether accuracy and speed are needed.

A gesture may be worth a thousand words; at other times a word is worth a thousand gestures. It depends on how many functions the system supports.

PhD student Sandy Ma’s drawing-based musical performance in an AR environment. 2024.

Which Interface?

  • Is multimedia better than tangible interfaces for learning?
  • Is voice effective as a command-based interface?
  • Is a multimodal interface more effective than a single media interface?
  • Are wearable interfaces better than mobile interfaces for helping people find information in foreign cities?
  • How does VR differ from AR, and which is the ultimate interface for playing games? etc.

It depends!

…the interplay of a number of factors, including type of task, the people using the system, context, reliability, social acceptability, privacy, ethical, and location concerns.

Questions: Who has a question?

Who has a question?

  • I can take catchbox questions up until 2:55
  • For after class questions: meet me outside the classroom at the bar (for 30 minutes)
  • Feel free to ask about any aspect of the course
  • Also feel free to ask about any aspect of computing at ANU! I may not be able to help, but I can listen.
Meet you at the bar for questions. 🍸🥤🫖☕️ Unfortunately no drinks served! 🙃

References

Adiwangsa, M., Bransky, K., Wood, E., & Sweetser, P. (2025). A game of ChARades: Using role-playing and mimicry with and without tangible objects to ideate immersive augmented reality experiences. Companion Publication of the 2025 ACM Designing Interactive Systems Conference, 440–445. https://doi.org/10.1145/3715668.3736382
Alexander, J., Roudaut, A., Steimle, J., Hornbæk, K., Bruns Alonso, M., Follmer, S., & Merritt, T. (2018). Grand challenges in shape-changing interface research. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–14.
Babich, N. (2018). The do’s and don’ts of mobile UX design. https://theblog.adobe.com/10-dos-donts-mobile-ux-design.
Billinghurst, M., Clark, A., & Lee, G. (2015). A survey of augmented reality. Foundations and Trends® in Human–Computer Interaction, 8(2–3), 73–272. https://doi.org/10.1561/1100000049
Bouchet, J., & Nigay, L. (2004). ICARE: A component-based approach for the design and development of multimodal interfaces. CHI’04 Extended Abstracts on Human Factors in Computing Systems, 1325–1328.
Clegg, T., Boston, C., Preece, J., Warrick, E., Pauw, D., & Cameron, J. (2020). Community-driven informal adult environmental learning: Using theory as a lens to identify steps toward concientización. The Journal of Environmental Education, 51(1), 55–71.
Cohen, M. H., Giangola, J. P., & Balogh, J. (2004). Voice user interface design. Addison-Wesley Professional.
Cooper, A., Reimann, R., Cronin, D., & Noessel, C. (2014). About face: The essentials of interaction design. John Wiley & Sons.
Deng, L., & Huang, X. (2004). Challenges in adopting speech recognition. Communications of the ACM, 47(1), 69–75.
Dumas, B., Lalanne, D., & Oviatt, S. (2009). Multimodal interfaces: A survey of principles, models and frameworks. In Human machine interaction: Research results of the mmi program (pp. 3–26). Springer.
Fishkin, K. P. (2004). A taxonomy for and analysis of tangible interfaces. Personal and Ubiquitous Computing, 8(5), 347–358.
Greenwold, S. (2003). Spatial computing [Master’s thesis]. School of Architecture; Planning, Massachusetts Institute of Technology.
Ishii, H., & Ullmer, B. (1997). Tangible bits: Towards seamless interfaces between people, bits and atoms. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 234–241.
Jordà, S. (2010). The reactable: Tangible and tabletop music performance. In CHI’10 extended abstracts on human factors in computing systems (pp. 2989–2994).
Krug, S. et al. (2014). Don’t make me think, revisited. A Common Sense Approach to Web and Mobile Usability, 3.
Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE TRANSACTIONS on Information, E77-D(12), 1321–1329.
Mullet, K., & Sano, D. (1996). Designing visual interfaces. ACM SIGCHI Bulletin, 28(2), 82–83.
Norman, D. A. (2013). The design of everyday things: Revised and expanded edition. Basic Books.
Oviatt, S., Schuller, B., Cohen, P., Sonntag, D., & Potamianos, G. (2017). The handbook of multimodal-multisensor interfaces, volume 1: Foundations, user modeling, and common modality combinations. Morgan & Claypool.
Rogers, Y., & Aldrich, F. (1996). In search of clickable dons: Learning about HCI through interacting with Norman’s CD-ROM. ACM SIGCHI Bulletin, 28(3), 44–47.
Rossiter, D. G. (2018). Past, present & future of information technology in pedometrics. Geoderma, 324, 131–137.
Shneiderman, B. (1992). Designing the user interface: Strategies for effective human-computer interaction. Addison-Wesley.
Shaer, O., Hornecker, E., et al. (2010). Tangible user interfaces: Past, present, and future directions. Foundations and Trends® in Human–Computer Interaction, 3(1–2), 4–137.
Speicher, M., Hall, B. D., & Nebeling, M. (2019). What is mixed reality? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–15.
Steed, A., Pan, Y., Watson, Z., & Slater, M. (2018). “We wait”—the impact of character responsiveness and self embodiment on presence and interest in an immersive news experience. Frontiers in Robotics and AI, 5. https://doi.org/10.3389/frobt.2018.00112
Sutherland, I. E. (1968). A head-mounted three dimensional display. Proceedings of the December 9-11, 1968, Fall Joint Computer Conference, Part i, 757–764. https://doi.org/10.1145/1476589.1476686
Ullmer, B., Ishii, H., & Jacob, R. J. (2005). Token+ constraint systems for tangible interaction with digital information. ACM Transactions on Computer-Human Interaction (TOCHI), 12(1), 81–118.
Ullmer, B., Shaer, O., Mazalek, A., & Hummels, C. (2022). Weaving fire into form: Aspirations for tangible and embodied interaction. Morgan & Claypool.
Veen, J. (2000). The art and science of web design. Pearson Education.
Wang, G. (2014). Ocarina: Designing the iPhone’s magic flute. Computer Music Journal, 38(2), 8–21.
Zhu, J., Chang, S., Zhao, R., & Kao, C. H.-L. (2025). LivingLoom: Investigating human-plant symbiosis through integrating living plants into (e-)textiles. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3706598.3713156