A number of people have asked for suggestions for project ideas. I've put together this list of possible ideas, including projects that people have done in the past. Please don't consider yourself limited to just these ideas--you're welcome to come up with cool UI techniques on your own. This list is just meant to convey the scope and scale of what we're looking for. Also note that short descriptions like this merely convey the topic area of possible projects; they don't describe all of the details about what would be sufficient to get a good grade on the project. (In other words, for any of these it's possible to do a very bad version or a very good version--simply choosing a topic on this list is insufficient to guarantee any particular grade.)
In general, good project ideas will represent a solid chunk of implementation work (at least equivalent to three homework assignments, so think at about this scale).
Ideally, your project will represent a novel interaction technique, or at least a novel application of an existing interaction technique. For example, just reimplementing Pie Menus is not very novel, since it's been done tons of times. Beyond that, Pie Menus are probably of insufficient complexity to make a good project (they're probably less than one homework's worth of effort). So, you do a pie menu project at your own peril! :-)
Here are some projects that have been done in the past, as well as potential project ideas from past TAs and myself. Please note that the project requirements may have changed somewhat from previous years, so these ideas may need to be tweaked slightly to meet this year's requirements.
Finally, the last few items below are some projects I would suggest NOT doing, along with my rationale.
- Create an application that recognizes, "beautifies," and interprets various hand-drawn geometric shapes. This might be used as a drawing cleanup tool (see Igarashi's Pegasus system), or with the figures interpreted in some way. For example, see Adobe's "Comp CC" tool on the App Store, which uses hand-drawn shapes to create layouts, or Landay's SILK tool for sketching UI layouts that can actually be run.
- Creation of a toolkit and associated interaction techniques for two-handed input in games. This might allow, for example, two mice to be used at the same time with independent cursors. You'd also have to use this system to come up with some interesting interaction techniques such as components that respond to multiple inputs, applications in collaboration or competition, etc.
- A constraint layout management framework, perhaps mimicking some of the features in iOS AutoLayout, and implementing the Cassowary algorithm.
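To give a flavor of what such a framework does under the hood: a real implementation would use Cassowary's incremental simplex solver over simultaneous linear constraints with strengths, but even a toy one-way solver conveys the idea of declaring layout relationships and letting the system compute positions. A minimal sketch, with all class and method names hypothetical:

```java
import java.util.*;

// Toy one-way constraint solver: each constraint sets a target variable
// from a source variable plus an offset (e.g., button.left = panel.left + 10).
// Real Cassowary handles simultaneous linear (in)equalities with strengths;
// this sketch only propagates simple equalities to a fixed point.
class LayoutVar {
    final String name;
    int value;
    LayoutVar(String name) { this.name = name; }
}

class EqualityConstraint {
    final LayoutVar target, source;
    final int offset;
    EqualityConstraint(LayoutVar target, LayoutVar source, int offset) {
        this.target = target; this.source = source; this.offset = offset;
    }
}

public class ToyLayoutSolver {
    public static void solve(List<EqualityConstraint> constraints) {
        // Naive fixed-point iteration: keep applying constraints until
        // no variable changes value.
        boolean changed = true;
        while (changed) {
            changed = false;
            for (EqualityConstraint c : constraints) {
                int v = c.source.value + c.offset;
                if (c.target.value != v) { c.target.value = v; changed = true; }
            }
        }
    }

    public static void main(String[] args) {
        LayoutVar panelLeft = new LayoutVar("panel.left");
        LayoutVar buttonLeft = new LayoutVar("button.left");
        LayoutVar labelLeft = new LayoutVar("label.left");
        panelLeft.value = 5;
        List<EqualityConstraint> cs = List.of(
            new EqualityConstraint(buttonLeft, panelLeft, 10), // button.left = panel.left + 10
            new EqualityConstraint(labelLeft, buttonLeft, 80)); // label.left = button.left + 80
        solve(cs);
        System.out.println(buttonLeft.value + " " + labelLeft.value); // 15 95
    }
}
```

A project in this space would replace the naive iteration with a real solver, handle inequality and priority ("strength") constraints, and hook the variables up to actual Swing component bounds.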
- A system for "remoting" a user interface. In other words, a way to have an application running on one platform and display its UI on another system across the network. This could involve retargeting the UI (translating it so that it works better on a PDA or phone, for example), or could deal with issues such as latency or possible disconnection.
- Speech UI toolkit, allowing arbitrary, existing Swing-based applications to be controlled using speech, and with speech output. This could include techniques such as "barge-in" (allowing the user to interrupt speech output by speaking), SpeechActs-style interaction, visual cues, etc.
- Design and implement one or more new interaction techniques for accomplishing some interactive task that's not well supported by the current "standard" techniques (like scrollbars and buttons). Be sure to pay attention to affordances, feedback, and mechanics. (Note that you will need to be careful with a project like this to ensure that it's not too small.)
- Create a new UI toolkit for use on Android phones.
- Develop a set of interaction techniques that use multitouch (or, pen+touch) to enable some cool functionality in a new application. See the papers on Pen+Touch=New Tools, LiquidText, or Neat Layout Gestures to spur some thinking.
- Create a system that automatically creates live web front-ends to Swing UIs. You might create a special "container" class into which you put arbitrary applications; the container would intercept Swing drawing requests and turn them into web output, and would also take input on the web site and use it to drive the application.
- Create Debugging Lenses for Swing, a la the Hudson and Smith paper. Or, alternatively, you might explore other debugging/development tools that tap into Swing to allow you to expose and manipulate the hidden GUI structure and debugging information in context.
- Create a WhyLine for Swing, a la the Myers paper.
- Use the Force Touch feature on recent Apple devices to explore new haptic feedback techniques, as well as the on-screen affordances that would couple with them.
- Create a set of alternative input methods using accelerometers to control Zooming User Interfaces.
- Create a set of tools for debugging complex constraint-based layouts.
- Create a set of input devices using piezo-resistive foam and use these to develop a range of interaction techniques for a particular domain (for example, music playing).
- Develop a set of extensions to the desktop interface, such as in the videos we saw earlier in the class (such as in the Bumptop videos or the work of Michel Beaudouin-Lafon). (You could do these inside a Swing DesktopPane rather than having to write platform-native code.)
- Animation toolkit for Swing, supporting various techniques discussed in class (slow-in, deformation of objects, etc.), and effects such as movement, fades, opacity changes, etc.
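As a concrete starting point for the slow-in/slow-out piece: an animation toolkit typically expresses it as an easing function over normalized time, which the toolkit applies before interpolating each animated property. A minimal sketch using a cubic ease-in-out (class and method names are hypothetical, not from any real toolkit):

```java
// Sketch of a slow-in/slow-out ("ease-in-out") interpolator of the kind an
// animation toolkit would apply between keyframes.
public class Easing {
    // Cubic ease-in-out: slow at both ends, fast in the middle.
    // t is normalized time in [0, 1]; returns normalized progress in [0, 1].
    public static double easeInOutCubic(double t) {
        return t < 0.5
            ? 4 * t * t * t
            : 1 - Math.pow(-2 * t + 2, 3) / 2;
    }

    // Linearly interpolate an animated property (say, an x position)
    // using the eased progress value.
    public static double animate(double from, double to, double t) {
        return from + (to - from) * easeInOutCubic(t);
    }

    public static void main(String[] args) {
        // Halfway through the animation we're exactly halfway there...
        System.out.println(animate(0, 100, 0.5)); // 50.0
        // ...but early frames move slowly (the "slow-in"):
        System.out.println(animate(0, 100, 0.1)); // ~0.4
    }
}
```

A toolkit project would wrap functions like this in a timing system (e.g., driven by a javax.swing.Timer on the event dispatch thread), plus property targets for movement, fades, and opacity changes.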
- Develop a set of interaction techniques, and supporting interaction hardware, for 3D user interfaces. (See a nice example video from a previous class here)
- Develop a framework for animating Java Swing components that's analogous to the Core Animation features in iOS and Mac OS X.
- Develop a flexible Java container component for doing fancy management of collections, a la the iOS 6 Collection View.
- Implement one of the non-graph-based constraint algorithms and create a Java layout framework using it (for example, based on Cassowary, Alan Borning's constraint solver based on the simplex algorithm).
- Develop a hybrid paper/electronic user interface, such as in the Paper PDA project.
- Create an audio-only PDA for notetaking and other applications, using your knowledge of good speech and non-speech audio principles.
- Basic Pie menus are not a suitable project--they're too easy. It may be possible to do some enhancements (such as adding hierarchy, or other features) to create a suitable pie menu project. This is a high bar, though, so I'd suggest steering clear of these.
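To make concrete why basic pie menus are "too easy": the core of the interaction is a single hit-test mapping the pointer's angle around the menu center to a slice index. A sketch (names are hypothetical; this uses the math convention with y pointing up, so flip dy for screen coordinates):

```java
// The heart of a basic pie menu: which slice is the pointer over?
public class PieHitTest {
    // Map a pointer position (dx, dy), relative to the menu center, to a
    // slice index in 0..slices-1. Slice 0 starts at the positive x-axis
    // and indices increase with angle (counterclockwise, y up).
    public static int sliceAt(double dx, double dy, int slices) {
        double angle = Math.atan2(dy, dx);   // in (-pi, pi]
        if (angle < 0) angle += 2 * Math.PI; // normalize to [0, 2*pi)
        return (int) (angle / (2 * Math.PI / slices)) % slices;
    }

    public static void main(String[] args) {
        // With 4 slices: east=0, north=1, west=2, south=3.
        System.out.println(sliceAt(10, 0, 4));  // 0
        System.out.println(sliceAt(0, 10, 4));  // 1
        System.out.println(sliceAt(-10, 0, 4)); // 2
    }
}
```

Everything else in a basic pie menu--drawing the slices and highlighting the active one--is similarly routine, which is why hierarchy, marking-menu-style gestures, or other enhancements are needed to make a real project of it.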
- It's extremely hard to do a good speech-based UI project that's of sufficient complexity and interactivity. If your model is that you're just speaking command words to control something on screen, it's likely not suitable. For one thing, common speech toolkits already handle all of this for you. For another thing, the interaction is very weak. Check out the SpeechActs paper for ideas on how to do something richer.
- A good gesture-based project needs to go beyond just basic gesture recognition. If you're integrating the SiGeR-style recognizer (or some other simple recognizer) with an accelerometer to detect gestures, this is probably not sufficient. Some suggestions for how to make something like this a good project: 1) Consider why you're using gestures. Is it a compelling usage scenario? Could what you're doing with gestures be more easily and conveniently accomplished using some other interaction? (For example, if you're waving your phone around to change the volume, it's probably a sign there may be other, simpler ways to accomplish this interactive task.) 2) Think about what information gestures might convey other than just simple commands. In other words, what's the dimensionality of gestures? Can they be used to convey some parameterized value? Can combinations of hands be used in an interesting way, reminiscent of the unique properties of multitouch?
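For reference, the quantization step at the heart of a SiGeR-style recognizer is quite small, which is part of why recognition alone isn't enough work for a project. A sketch of that step (names are hypothetical; SiGeR itself then matches the resulting direction string against regular-expression-like patterns):

```java
// Sketch of the direction-quantization step a SiGeR-style recognizer uses:
// each segment of the stroke becomes a compass direction (here just 4 of
// them, by dominant axis), and runs of the same direction are collapsed.
public class StrokeDirections {
    public static String quantize(int[] xs, int[] ys) {
        StringBuilder sb = new StringBuilder();
        for (int i = 1; i < xs.length; i++) {
            int dx = xs[i] - xs[i - 1], dy = ys[i] - ys[i - 1];
            char d;
            if (Math.abs(dx) >= Math.abs(dy)) d = dx >= 0 ? 'E' : 'W';
            else d = dy >= 0 ? 'S' : 'N'; // screen coords: y grows downward
            // Collapse consecutive duplicates so "SSSEE" becomes "SE".
            if (sb.length() == 0 || sb.charAt(sb.length() - 1) != d) sb.append(d);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // An "L" shape drawn as: down, down, right, right.
        int[] xs = {0, 0, 0, 10, 20};
        int[] ys = {0, 10, 20, 20, 20};
        System.out.println(quantize(xs, ys)); // SE
    }
}
```

The interesting project work lies beyond this step, in the questions above: what the gestures are for, and what continuous parameters or combinations they can carry.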