Authors:
-Julia Schwarz - PhD student at CMU working in the dev lab
-Scott E. Hudson - Professor of HCI at CMU
-Jennifer Mankoff - Associate Professor at the HCII at CMU
-Andrew D. Wilson - Senior Researcher at Microsoft Research
Presentation Venue:
UIST’10, October 3–6, 2010, New York, New York, USA
Summary:
The authors observe that there is a great deal of uncertainty in today's input methods, such as pen and touch, and assert that despite this uncertainty there has been little development in the area. This paper presents a way to handle input with uncertainty in a "systematic, extensible, and easy to manipulate fashion". They offer six demonstrations, which "include tiny buttons that are manipulable using touch input, a text box that can handle multiple interpretations of spoken input, a scrollbar that can respond to inexactly placed input, and buttons which are easier to click for people with motor impairments". Their tests show how straightforward it would be to apply the tools they developed to today's software. The authors explain how the different stages process uncertain inputs and describe how their framework, which is based on probability calculations, operates at each stage. They then perform tests with ambiguous inputs and describe how each input is handled by the framework.
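As a rough illustration of the probability-based idea (a minimal sketch of my own, not the authors' actual framework), an uncertain touch can be scored against every nearby target and turned into a probability distribution over possible interpretations, rather than being snapped to a single target up front:

```python
import math

def touch_likelihood(touch, target_center, sigma=10.0):
    # Model the finger's true position as a 2D Gaussian around the touch point.
    dx = touch[0] - target_center[0]
    dy = touch[1] - target_center[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def interpret(touch, targets):
    # Score every candidate target, then normalize into probabilities.
    scores = {name: touch_likelihood(touch, c) for name, c in targets.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Two tiny buttons 15 px apart; the touch lands closer to "ok".
probs = interpret((12, 0), {"ok": (10, 0), "cancel": (25, 0)})
best = max(probs, key=probs.get)
```

In a full framework the interpretation would stay uncertain until later stages (or user feedback) resolve it; this sketch only shows the scoring step.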
Discussion:
It is an interesting idea that input can be uncertain. I had never thought about input being uncertain before reading this paper. I can see why the authors think that this framework would be very useful to modern day applications. Imagine our touch-screen devices with this kind of technology. It would be interesting to see this system deployed on all platforms uniformly. It would bring some level of definition to all platforms and give users a general idea of what to expect when using different platforms.
CHI Tea
Wednesday, September 7, 2011
Blog #4 Gestalt
Authors:
-Kayur Patel - PhD Student at the University of Washington
-Naomi Bancroft - currently working for Google
-Steven M. Drucker - Principal Researcher at Microsoft Research
-Andrew J. Ko - Assistant professor at the University of Washington
-James A. Landay - Professor of Computer Science and Engineering at the University of Washington
-James Fogarty - Assistant professor at the University of Washington
Presentation Venue:
UIST’10, October 3–6, 2010, New York, New York, USA.
Summary:
The authors hypothesized that programmers like to design programs to behave in a certain way, but that current machine learning systems must be taught behaviors.
The authors tested their hypothesis by creating a development environment for their test subjects to work in. The subjects were given APIs that let them use all of Gestalt's visualizations. The baseline condition and Gestalt used the same data structures to hold the data each user wanted to use; in the baseline condition, however, the users had to write their own code to connect data, attributes, and classifications. The users were told that the solutions given to them contained 5 bugs, and that they had to find them. The test subjects all preferred Gestalt to the baseline and found it much easier to locate and fix the bugs using Gestalt.
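To give a sense of what that baseline "glue code" might look like, here is a toy sketch of my own (the names and logic are illustrative, not Gestalt's actual API) of manually wiring together data, attributes, and classifications so they can be inspected side by side:

```python
def extract_attributes(example):
    # Toy features: word count and presence of an exclamation mark.
    return {"words": len(example.split()), "excited": "!" in example}

def classify(attrs):
    # Toy rule standing in for a trained classifier.
    return "positive" if attrs["excited"] else "neutral"

# Manually connect each example to its attributes and its classification,
# the kind of bookkeeping Gestalt handled for its users.
examples = ["great talk!", "the room was cold"]
table = []
for ex in examples:
    attrs = extract_attributes(ex)
    table.append({"data": ex, "attributes": attrs, "label": classify(attrs)})
```

Even in this tiny example, a bug in the wiring (say, pairing an example with the wrong attributes) is invisible without a view like `table` to inspect, which is exactly the kind of visibility Gestalt's visualizations provided.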
Discussion:
The authors' research was very well done and methodically planned. I liked how quickly the test subjects were able to find bugs when using Gestalt versus the baseline. I know how frustrating it is to work with machine learning algorithms; it is very difficult to troubleshoot them when something goes wrong. I would have liked to try out Gestalt and see exactly how it works.
Tuesday, September 6, 2011
Blog #3 Pen + Touch = New Tools
Paper Title:
Pen + Touch = New Tools
Paper Authors:
Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Coddington, Jenny Rodenhouse, Andy Wilson, Hrvoje Benko, Bill Buxton
Author Bios:
-Ken Hinckley is a Principal Researcher at Microsoft Research. His research aims to enhance the input vocabulary of computational devices and user interfaces.
-Koji Yatani received his PhD from the University of Toronto. He is now part of the Human-Computer Interaction group at Microsoft Research Asia in Beijing.
-Michel Pahud received his PhD in parallel computing from the Swiss Federal Institute of Technology. He now works at Microsoft Research.
-Nicole Coddington is a Senior Interaction Designer at HTC.
-Jenny Rodenhouse is a Designer working for Microsoft in Seattle. Her current position is an Experience Designer II at the Xbox Interactive Entertainment Division in Microsoft.
-Andy Wilson is a senior researcher at Microsoft.
-Hrvoje Benko is a researcher at the Adaptive Systems and Interaction Group at Microsoft Research.
-Bill Buxton is a Principal Researcher at Microsoft Research.
Presentation Venue:
This paper was presented at UIST '10, Proceedings of the 23rd annual ACM symposium on User Interface Software and Technology.
Hypothesis:
“The pen writes, touch manipulates, and the combination of pen + touch yields new tools”
The authors conducted an observational study that included eight people, and gathered feedback from an additional 11 test users.
The results showed that most users preferred a division of labor between pen and touch. However, the users' feedback indicated they would also like to use each interchangeably.
Summary:
The authors identify nine key design considerations for pen + touch. They also observe people's use of physical paper and notebooks and factor that into their results. Earlier studies on this topic involved either the use of pen OR the use of touch; a new generation of digitizers is now emerging, however, that can differentiate between pen and touch and thus enable the use of pen AND touch.
The authors then discuss how things can be written and manipulated on the touch interface. They go on to describe some specific operations performed by their test users, such as stapling, cutting, and tearing, along with the feedback they received from the test users.
Discussion:
The authors introduced a new technology and the concept of using pen AND touch to provide users with new tools. The studies they performed were comprehensive. A lot of what went into designing this system was the observation of people using their notebooks, which gave the authors insight into how to design a system that would work for most individuals.
Most touch-screen tablets and mobile phones no longer use a "pen"; most now rely entirely on touch input. This might seem more convenient to certain users. This research is interesting because it attempts to bridge the gap between using just touch or just "pen" input. Sometimes while using "touch" devices I would like to use a pen, and vice versa.
Blog #2 Hands-On Math
Paper title:
Hands-On Math: A page-based multi-touch and pen desktop for technical work and problem solving
Paper Authors:
Robert Zeleznik, Andrew Bragdon, Ferdi Adeputra, Hsu-Sheng Ko
Author Bios:
-Robert Zeleznik is the Director of Research at Brown University’s Computer Graphic Group.
-Andrew Bragdon is a second year PhD student at Brown University.
-Ferdi Adeputra studied Computer Science at Brown University and is now an analyst at Goldman Sachs.
-Hsu-Sheng Ko studied at Brown University.
Presentation Venue:
This paper was presented at the UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.
Summary:
Hands-On Math is a hybrid of a Computer Algebra System (CAS) and virtual paper. The authors wanted to combine a virtual paper system with the power of a standard CAS. To limit the research and time involved, only a high-school level of math was implemented. The authors' main goal was to see how users' efficiency improved using this system. To test the hypothesis, the authors selected nine individuals to provide feedback on Hands-On Math and suggest ways it could be improved.
Hands-On Math incorporates a CAS and touch surfaces to simulate paper. It runs on a Microsoft Surface and is equipped with an infrared light pen, which is used to distinguish pen input from touch input. Hands-On Math brings together the visibility and sharing ability of writing on a whiteboard while preserving the pen-and-paper feeling.
This software relies heavily on gestures to perform an assortment of functions. For example, to create a new page one must swipe with two fingers; swiping with only one finger simply scrolls within the current area.
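The rule above can be sketched as a simple dispatch on finger count (my own illustration with hypothetical names, not the system's actual code):

```python
def handle_swipe(finger_count, delta):
    # Two or more fingers: the swipe creates a new page.
    if finger_count >= 2:
        return "create_new_page"
    # A single finger: the same motion just scrolls the current area.
    return ("scroll", delta)

action = handle_swipe(2, 30)
```

The interesting design point is that one physical motion maps to two different commands, which keeps the gesture vocabulary compact but, as noted below, adds to the learning curve.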
Discussion:
This technology is an attempt to bridge the gap between simple note taking and the power of a CAS. It could be a very powerful tool for researchers and working professionals alike. You could use it to teach your children math, or to design the next major city. The potential applications of this software/hardware are amazing.
However, as is always the case with research, there are some drawbacks. The software has a steep learning curve because of the multitude of gestures involved. I think that in the future they should cut down the number of gestures for beginners, and then allow users to enable more gestures as needed.
Thursday, September 1, 2011
Imaginary Interfaces
Sean Gustafson - Ph.D student @ Hasso Plattner Institut
Daniel Bierwirth - Masters Degree from Hasso Plattner Institut, now mobile computing consultant
Patrick Baudisch - Chair of Hasso Plattner Institut
Presented at the UIST Conference in 2010
Hypothesis: To what extent can users' visuospatial memory replace visual feedback, and how do success rates compare between different input methods?
They split the experiment into three different user tests. The first had the users draw different shapes and symbols. The second had them draw a figure, then point to different parts of it after they drew it. Lastly, they had the users point to different coordinates in an imaginary 2D plane. In each of these tests, the authors measured how accurate the users were and how long each task took.
In the first test, the subjects had to draw different shapes repeatedly. The results of each user were compared to see how accurately each person could draw shapes repeatedly in an imaginary space. They found that the test subjects performed better when they had a reference point, such as their non-dominant hand held in the shape of an L.
The second test required the subjects to draw a line with several corners, then point to specific numbered corners. The test measured how accurately the subjects could point back to parts of the shape in the imaginary space.
The final test had the subjects map out a 2D coordinate system in imaginary space. The results showed how accurately the subjects could point at positions offset from their fingers.
Discussion:
This paper was very relevant to today's technological movement. It would be interesting to see this technology implemented for use by everyday people. The authors of the paper mentioned that things can only get as small as the screens that display information. At some point the screens cannot get any smaller because they become useless. The only problem with this technology is that it would require a large amount of pre-training before a user could successfully use the device.
On Computers
In the text about the Chinese Room thought experiment, it is clear what Dr. Searle is arguing. We are presented with the problem: "What defines something as having true understanding rather than just interpreting?" It is easy to convey the appearance of understanding; it is hard to truly understand. This is Dr. Searle's main argument against computers: will they ever be able to truly understand? To represent a true brain, one must be able not only to draw conclusions about certain subjects and pieces of information; computers must take that information, put it together, and form new thoughts and ideas. On this view, computers will never reach the capacity of brains because they are just running programs that interpret symbols. Computers do not understand; they simply compute.
If we are ever to create a true artificial intelligence, we must first find out what makes us, as humans, aware of ourselves. Aristotle's On Plants gives us a unique insight into what makes something alive. Plants themselves seem to be alive: they require nourishment, and they do reproduce. But are they alive? Unlike animals, plants do not have habits. They do not require sleep. Plants don't move, nor do they reproduce the way animals do. Therefore we can conclude that they are alive, but that they do not have souls. Plants require certain conditions to live, whereas we find animals of the same species living in different environments.
All of this together shows us just how hard it will be to design something that is truly intelligent. A system cannot simply pass the Turing test to prove intelligence; it must TRULY understand what it is doing, not just interpret symbols. Once we approach this level of understanding, we will not only have created an intelligent being, but will also have learned what makes us human.
Wednesday, August 31, 2011
Introduction
Sorry, don't have a good picture of myself. (Me in the center)
e-mail:bzmadura@gmail.com
Class standing
Senior (WHOOP!!!)
Why are you taking this class?
I have always been interested in the process behind coming up with interfaces.
What experience do you bring to this class?
I have always enjoyed helping people learn how to use their devices. Therefore I like to know everything there is to know about a certain device. This gives me a huge knowledge base from which to pull while designing/implementing.
What do you expect to be doing in 10 years?
I expect to be working in a law firm somewhere as a patent/contract attorney.
What do you think will be the next biggest technological advancement in computer science?
I think it will be the use of cloud computing in all facets of computer use.
If you could travel back in time, who would you like to meet and why?
Richard Feynman, he was such an interesting man and had the ability to explain the most complex things to any person, regardless of their background.
Describe your favorite shoes and why they are your favorite?
My favorite shoes are a pair of Pumas that feel like they are massaging my feet with every step I take.
If you could be fluent in any foreign language that you're not already fluent in, which one would it be and why?
I would LOVE to be fluent in German. I love the country of Germany, its people, places, food, I would love to be able to converse with everyone I would meet there.
Interesting fact about yourself.
You name it, I probably can cook it.