Dr Charles Martin
Who has a question about assignment 2?
In scholarly writing, we need to support every statement we make. Support can be either (1) a citation to a scholarly source or (2) evidence from a study. Where do you find these sources?
Careful:
Go through virtual.anu.edu.au so that you can access all papers.

In this class, we are going to standardise on numerical citations. When you state something in your text that needs support, you put the citation at the end of the sentence [1].
- [1] Kazuhiro Wada, Masaya Tsunokake, and Shigeki Matsubara. 2025. Citation-Worthy Detection of URL Citations in Scholarly Papers. Proceedings of the 24th ACM/IEEE Joint Conference on Digital Libraries. Association for Computing Machinery, New York, NY, USA, Article 28, 1–5. <https://doi.org/10.1145/3677389.3702570>
BibTeX format

BibTeX is part of the venerable (La)TeX ecosystem, originally developed by CS luminary Donald Knuth in the late 70s.

@inproceedings{adiwangsa-charades:2025,
author = {Adiwangsa, Michelle and Bransky, Karla and Wood, Erika and Sweetser, Penny},
title = {A Game of ChARades: Using Role-Playing and Mimicry with and without Tangible Objects to Ideate Immersive Augmented Reality Experiences},
year = {2025},
isbn = {9798400714863},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3715668.3736382},
abstract = {Using tangible objects for immersive augmented reality (AR) experiences offers various benefits, such as providing a physical means of interacting with virtual objects and enhancing the functionality of everyday objects. However, designing AR experiences with tangible objects presents unique challenges, particularly due to the diverse physical properties that can influence user interactions. In this provocation, we explore effective approaches for ideating such AR experiences, by designing two exergames intended for AR head-mounted displays (HMDs). We found that role-playing and mimicry, both with and without tangible objects, provide valuable benefits in the design of such experiences. Building on this insight, we introduce ChARades, an iterative and playful gamestorming technique that incorporates role-playing and mimicry in both forms, to ideate immersive AR experiences involving tangible objects.},
booktitle = {Companion Publication of the 2025 ACM Designing Interactive Systems Conference},
pages = {440–445},
numpages = {6}
}
The metadata entry for Adiwangsa et al. (2025), which is cited using [@adiwangsa-charades:2025] in this Markdown document. In LaTeX you would cite it like this: \cite{adiwangsa-charades:2025}.
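To make the workflow concrete, here is a minimal LaTeX sketch (assuming the entry above is saved in a file called references.bib; the plain bibliography style is just an illustrative choice, not a course requirement):

```latex
\documentclass{article}

\begin{document}

% A claim in the text, supported by a citation at the end of the sentence.
Role-playing and mimicry can help ideate tangible AR
experiences \cite{adiwangsa-charades:2025}.

% Pick a style and point BibTeX at the .bib file (no extension).
\bibliographystyle{plain}
\bibliography{references}

\end{document}
```

Compile with pdflatex, then bibtex, then pdflatex twice more so the citation numbers resolve.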
Big reference libraries are part of academic work. Charles' personal .bib file has ~1300 entries (started in 2013, first year of Charles' PhD!). The references.bib for this course has about 100 entries.

🙋🏽♀️🤷💁🏻🧠🗣️
Mucking up citations risks losing marks unnecessarily. Citation mistakes are easy to make, and they look really bad to markers.
Let’s clear up some questions now if you have them!
I commit to finding ways to help you do this better!
How would you describe a computer interface?
graphical, command, speech, ambient, intelligent, tangible, touch free, natural, etc.
Focus of interface can change. Commands can take several forms: abbreviations (e.g., ls), key combinations (e.g., Ctrl + V), special keys (e.g., delete, enter, esc), or user-defined commands (e.g., :w in vim).

Back in the 1980s, much research investigated optimising command interfaces. Design principle: command labels/names should be chosen to be as consistent as possible!
A single interface combines different media such as graphics, text, video, sound, and links them together with various forms of interactivity. E.g., Wikipedia.
Interfaces can sit on a spectrum between fully virtual and fully real interaction (Milgram & Kishino, 1994).
Has often seemed like a great idea, but hasn’t cracked mainstream yet (or has it??)
Emerged in 1970s with computer-generated graphical simulations.
Goal: to create user experiences that feel real when interacting with an artificial environment.
🙋🏽♀️🤷💁🏻🧠🗣️
Find someone near you, and discuss these questions:
Chat for 3 minutes and we’ll hear a few responses.
Much of the content on a web page is not read. Web designers are “thinking great literature” (or at least “product brochure”), while the viewer’s reality is much closer to a “billboard going by at 60 miles an hour” (Krug et al., 2014).
Smartphones, fitness trackers, smartwatches, large-sized tablets on the flight, educational tablets, etc.
Which conversational mechanisms should structure the voice interface, and how human-like should they be?
Pros and cons of dialogue structures, error handling, and etiquette remain key for modern voice interfaces (Cohen et al., 2004).
Pens are great! We should do more with them.
E.g., Kinect for Xbox — combined RGB camera, depth sensor, and microphones for real-time gesture and voice recognition.
Multimodal interfaces must recognise multiple aspects of the user: handwriting, speech, gestures, eye movements, and body movements.
For design guidelines, see (Oviatt et al., 2017).
Designed for multi-user interaction, unlike typical single-user devices (PCs, laptops, phones).
They support multiple simultaneous inputs by co-located groups. E.g., large wall displays (e.g., SmartBoards); interactive tabletops where users interact using fingertips on a shared surface.
They range from single-device interactions (e.g., handwriting) to cross-device collaboration; key design issues include display layout, participation, and balancing personal and shared spaces.
Link physical objects (bricks, cubes, clay) with digital representations through embedded sensors (Fishkin, 2004; Ishii & Ullmer, 1997)
Manipulating objects triggers digital effects like sounds, animations, or lights, either on the object or in surrounding media (e.g., Tangible Bits (Ishii & Ullmer, 1997)).
Some use physical models on digital surfaces (tabletops), where moving objects influences digital events (e.g., Urp for urban planning).
Physical artifacts can be lifted, rearranged, and manipulated directly, distinguishing them from purely screen-based or mobile interfaces.
Robots assist in manufacturing, hazardous environments, rescue, and remote exploration with remote controls using cameras and sensors.
Domestic robots help with chores and support elderly or disabled people.
Pet robots like Paro provide companionship and reduce loneliness, especially for dementia patients.
Drones, once military and hobbyist tools, now serve in delivery, entertainment, agriculture, construction, and wildlife monitoring, offering real-time data and automated operations.
Create a link between brain activity and external devices (e.g., cursor, robot, game controller).
Detect neural signals via electrodes in headsets placed on the scalp.
Assist or augment cognitive and motor functions—especially for people with disabilities (e.g., BrainGate lets paralyzed users type via thought).
…and entertainment – e.g., Brainball, where relaxation controls a ball’s movement.
Aim to transfer mental states (e.g., “focused,” “relaxed”) between people via stimulation.
Ethical concerns arise around mind reading and mental state manipulation, especially with companies like Neuralink pursuing direct brain implants.
🙋🏽♀️🤷💁🏻🧠🗣️
Imagine you are developing a new interface for the classic game “Connect Four”.
Chat for 3 minutes with someone next to you then let’s hear some ideas.
What makes these interfaces good or bad?
How would you choose what interface to use?
A natural user interface (NUI) is designed to allow people to interact with a computer in the same way that they interact with the physical world—using their voice, hands, and bodies.
Don Norman (Norman, 2013) argues “natural” depends on a number of factors:
A gesture may be worth a thousand words; at other times, a word is worth a thousand gestures. It depends on how many functions the system supports.
It depends!
…the interplay of a number of factors, including type of task, the people using the system, context, reliability, social acceptability, privacy, ethical, and location concerns.
Who has a question?