Comments
Comment1
Comment2
Reference
Detecting and Leveraging Finger Orientation for Interaction with Direct-Touch Surfaces
Feng Wang, Xiang Cao, Xiangshi Ren and Pourang Irani
UIST '09, October 4-7, 2009, Victoria, British Columbia, Canada
Summary
This paper presents an algorithm that detects finger orientation in real time and discusses how it could benefit applications that rely heavily on finger orientation. Users have adapted very well to multi-touch devices since they give the user the freedom to manipulate the system without an intermediary. The authors believe finger orientation is an important piece of information. There are two types of finger touch: vertical touch (finger pointing directly downward) and oblique touch (finger touching the surface at an oblique angle). When a finger touches a surface it creates an elliptic contact shape. By examining the finger's deformation on the surface, useful information can be recovered, such as the location of the user's palm and the direction in which the finger is pointing. To test their algorithm, the researchers used a tabletop surface based on FTIR technology.
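As a rough illustration of the elliptic-contact idea (this is my own sketch, not the authors' actual algorithm), the major axis of a touch blob can be estimated from second-order image moments; the synthetic blob and all the names below are assumptions:

```python
import numpy as np

def touch_orientation(mask):
    """Estimate the major-axis angle (radians) of an elliptical
    contact blob from its second-order central image moments."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    # Angle of the ellipse's major axis relative to the x-axis.
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# A synthetic diagonal blob, standing in for a finger pressed at ~45 degrees.
mask = np.zeros((32, 32), dtype=bool)
for i in range(24):
    mask[i, i] = mask[i, min(i + 1, 31)] = True

angle_deg = np.degrees(touch_orientation(mask))
```

Note the axis alone is ambiguous by 180 degrees; the paper's contribution includes resolving which end is the fingertip, which this sketch does not do.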
Discussion
This paper was very boring in my humble opinion. I can't think of a specific application that could benefit from their research. It was interesting to read some of the things they pointed out, but I think existing touch interaction is already very accurate. I might be wrong, but I just did not enjoy reading this paper.
Monday, April 25, 2011
Saturday, April 23, 2011
Paper Reading # 13
Comments
Comment1
Comment2
Reference
Mouse 2.0: Multi-touch Meets the Mouse
Nicolas Villar, Shahram Izadi, Dan Rosenfeld, Hrvoje Benko, John Helmes, Jonathan Westhues, Steve Hodges, Eyal Ofek, Alex Butler, Xiang Cao, Billy Chen
UIST '09, October 4-7, 2009, Victoria, British Columbia, Canada
Summary
This paper talks about innovative input devices that combine the capabilities of a computer mouse with multi-touch. Even though multi-touch has been incorporated into mobile phones and tablets, desktops have yet to adopt multi-touch input devices. The authors describe five different multi-touch (MT) mice along with their benefits and limitations: the FTIR Mouse, Orb Mouse, Cap Mouse, Side Mouse, and Arty Mouse. The frustrated total internal reflection (FTIR) mouse is composed of an acrylic sheet molded into a smooth arc, IR LEDs, an optical sensor, and a camera; when fingers press on the acrylic sheet, the escaping IR light is detected by the camera, and the optical sensor is used to locate input displacement across the sheet. The Orb Mouse is composed of an IR-sensitive camera and an internal source of IR illumination; illumination radiates away from the center of the device and is reflected back by objects. The main problem here is that those objects could be the user's hand or a nearby keyboard, which the device finds hard to differentiate. The Cap Mouse uses capacitive touch sensing: when a user touches the mouse, a change in capacitance in a specific area is detected. The benefit of this approach is that it is not affected by illumination. The Side Mouse detects movement on the surface in front of it instead of movement of the mouse itself; finger movement is reflected as IR light back to the camera, so the input area is not limited by the surface of the device. The Arty Mouse is composed of a base where the palm of the hand rests and two arms extending from it where the thumb and index fingers are placed.
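The capacitive sensing idea can be sketched as baseline subtraction plus thresholding over the electrode grid; the grid shape, threshold value, and function name below are all illustrative assumptions, not from the paper:

```python
import numpy as np

def touched_cells(frame, baseline, threshold=5.0):
    """Return (row, col) electrode indices where the capacitance
    change over the resting baseline exceeds a threshold."""
    delta = frame - baseline
    return list(zip(*np.nonzero(delta > threshold)))

# A resting baseline and one frame where a finger raises the
# capacitance at a single electrode.
baseline = np.zeros((4, 6))
frame = baseline.copy()
frame[2, 3] = 12.0

touches = touched_cells(frame, baseline)
```

A real device would also cluster adjacent cells into one contact and interpolate a sub-cell position, which this sketch skips.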
Discussion
It seems incredible to me that even though multi-touch has become the new norm of input, we have not yet adopted multi-touch mice. I do not know if we are waiting for the next generation of desktops to incorporate them, but we are definitely lagging in that area. Out of the five designs described in this paper, I think I would like the FTIR mouse the best.
Paper Reading # 25
Comments
Comment1
Comment2
Reference
A Code Reuse Interface for Non-Programmer Middle School Students
Paul A. Gross, Micah S. Herstand, Jordana W. Hodges, Caitlin L. Kelleher
IUI'10, February 7-10, 2010, Hong Kong, China
Summary
This paper talks about a programming tool that lets middle school students with no programming experience reuse code functionality. The authors believe that middle school is a very important learning stage where students usually decide whether they will pursue math- and science-related careers, yet it is extremely rare for schools to offer Computer Science courses at this stage of education. To try to spark more interest in Computer Science, they developed a programming tool integrated with a program called Looking Glass. The process works as follows: users find the code that corresponds to the desired functionality, extract this code (called an Actionscript), and finally integrate it into a new program. The main goal is to allow middle school students to reuse snippets of code without fully understanding how all of the code works. To facilitate this process, code navigation is based on an observable output display. Looking Glass is based on storytelling and is designed for creating animated stories. An experiment was conducted at an ExxonMobil Summer Science Camp. At the end of the session, the students took a quiz where they were asked to identify the best description of code snippets. According to the results, about 98% of the participants were able to capture and reuse Actionscripts.
Discussion
How awesome would it have been if I had been exposed to something similar to this? My first experience with computer programming was in my second semester of college. I truly believe that programs like this could attract more people to Computer Engineering/Science. It is a very interesting approach, and there should be more programs like it introducing middle school students to future opportunities. Even though this experiment aimed at making students more familiar with Computer Science, the approach could work for other subjects as well.
Thursday, April 21, 2011
Paper Reading # 24
Comments
Comment1
Comment2
Reference
Outline Wizard: Presentation Composition and Search
Lawrence Bergman, Jie Lu, Ravi Konuru, Julie MacNaught, Danny Yeh
IUI'10, February 7-10, 2010, Hong Kong, China
Summary
This paper talks about creating presentations from existing presentations in common programs such as Microsoft PowerPoint. Currently there are no search tools that can return the individual slides displaying the content a user is searching for. The authors propose Outline Wizard, an outline-based composition and search system. Its main goal is to search over presentations' hierarchical structures to make it easier to compose new presentations and add slides from existing ones. Another important aspect of their system is achieving search that finds results relevant to a specific topic, not just a keyword: currently, when a user searches for a keyword they may find a single slide, but other slides with relevant information on that topic will not be retrieved. Trying to find relevant information can be tedious and time-consuming, so Outline Wizard stresses the importance of composing presentations using a hierarchical outline.
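The slide-level retrieval idea (returning individual slides rather than whole files) can be sketched as below; this is a bare keyword match of my own invention, whereas Outline Wizard matches against the outline hierarchy:

```python
def find_slides(presentations, query):
    """Return (presentation name, slide index) pairs whose slide text
    mentions the query -- slide-level rather than file-level retrieval."""
    q = query.lower()
    hits = []
    for name, slides in presentations.items():
        for i, text in enumerate(slides):
            if q in text.lower():
                hits.append((name, i))
    return hits

# Hypothetical decks: each value is a list of per-slide text.
decks = {"deck": ["Intro", "Results on search"]}
hits = find_slides(decks, "Search")
```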
Discussion
This paper introduces a very cool idea; I consider it one of those papers that can actually be beneficial to users. I am not too familiar with PowerPoint's search capabilities, but I would have expected this idea to be introduced earlier. Another thing I noticed is that the screenshots seem to show an older version of PowerPoint, which would probably explain why these proposed search capabilities were not present in PowerPoint yet.
Paper Reading # 23
Comments
Comment1
Comment2
Reference
Facilitating Exploratory Search by Model-Based Navigational Cues
Wai-Tat Fu, Thomas G. Kannampallil, and Ruogu Kang
IUI'10, February 7-10, 2010, Hong Kong, China
Summary
This paper talks about social tagging and exploratory search. The simplest way to describe social tagging is associating labels or shortcuts with pieces of information to facilitate a search. Exploratory search is explained thoroughly in the paper and is best described as ongoing search. Social tagging and exploratory search can generate navigational cues that facilitate knowledge exchange. The authors' thesis states that different interaction methods will significantly impact the structuring, shaping, and behavior of human-computational systems, and they claim that navigational cues create more intelligent interfaces. Social tags provide cues that facilitate information exploration, as they help users predict content and discover information on relevant topics. These methods focus on exploratory search rather than simple fact-retrieval searching. The main issue with navigational cues, or with creating tags for pieces of information, is the vocabulary problem: different words are used to describe similar content, and as the number of tags grows they are increasingly used incorrectly to describe information. Even so, the authors show with their research that tags seem to converge over time and that tagging becomes stable. This could be explained, I would think, by users imitating how other users have created tags, keeping the community consistent.
Discussion
After reading multiple papers on navigational cues, I am starting to see very similar content. I have to say that I found this paper boring to some extent, and the statistics in it completely lost me. One thing I read in a previous paper that could be very helpful to these researchers is the ability to create search communities based on similar topic searches.
Tuesday, April 19, 2011
Paper Reading # 22
Comments
Comment1
Comment2
Reference
DocuBrowse: Faceted Searching, Browsing, and Recommendations in an Enterprise Context
Andreas Girgensohn, Frank Shipman, Francine Chen, Lynn Wilcox
IUI'10, February 7-10, 2010, Hong Kong, China
Summary
This paper talks about browsing and searching for documents in enterprise document repositories, and about the system the authors propose, called DocuBrowse. Their main focus is searching for information in an unstructured corporate document repository. The weakness of current enterprise document repositories is that they assume employees already know where the document they are searching for is located: in a typical corporate structure, documents are stored in folders corresponding to projects or departments, so finding a document is easy only when the user expects it to be there. The authors propose a system where document searching resembles a web search engine: users can search for documents they are unfamiliar with, there does not need to be a repository structure, and most importantly, a user does not need to expect a specific result. Browsing in DocuBrowse is similar to familiar directory trees. To support this interface design, the authors use data-oriented document analysis similar to that of web search engines such as Google. DocuBrowse supports faceted searching and also adds recommendations, so users can determine whether their search was successful or whether some properties of a document can lead them to the correct information.
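Faceted search, at its core, means narrowing a result set by selecting one value per metadata dimension. A minimal sketch (the facet names and documents are made up, not from the paper):

```python
def faceted_filter(docs, **facets):
    """Keep only documents whose metadata matches every selected facet value."""
    return [d for d in docs if all(d.get(k) == v for k, v in facets.items())]

# Hypothetical repository documents with a couple of metadata facets.
docs = [
    {"title": "Q3 hiring plan", "type": "report", "dept": "HR"},
    {"title": "Office move",    "type": "memo",   "dept": "HR"},
    {"title": "API design",     "type": "report", "dept": "Engineering"},
]

hr_memos = faceted_filter(docs, dept="HR", type="memo")
```

A real faceted UI would also show the remaining value counts per facet so users can see which refinements are still possible.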
Discussion
While working at my internship, I sometimes became really frustrated trying to find information when I did not know where to look for it. The system we were using suffered from the same weaknesses this paper points out, and I truly believe DocuBrowse could have been really helpful. It is no lie that current web search engines are extremely powerful, and it is an interesting idea to bring those capabilities to enterprise document repositories.
Tuesday, April 12, 2011
Paper Reading # 21
Comments
Comment1
Comment2
Reference
Towards a Reputation-based Model of Social Web Search
Kevin McNally, Michael P. O'Mahony, Barry Smyth, Maurice Coyle, Peter Briggs
IUI'10, February 7-10, 2010, Hong Kong, China
Summary
This paper talks about how web search is usually experienced by users as a solitary activity. The authors propose a collaborative web search system that adds features on top of mainstream search engines such as Google or Yahoo. More specifically, the authors designed HeyStaks, which has been deployed online and, according to them, has more than 500 users. The idea is to capture and share search experiences with other users, which they believe will facilitate web search by creating search communities. Users who become community members can benefit from recommendations from other members. Within these search communities there are search leaders and search followers, identified by how much information they share, how many communities they create, and how much they contribute to other community members. The biggest benefits of using HeyStaks are that users can still use their favorite search engine and that it makes search a more collaborative experience. HeyStaks lets you create "staks", which are similar to folders where you save search experiences. These staks can be shared with other users to facilitate their searches, and they generate recommendations based on the relevance of results that other users have tagged or shared. Staks can be private or public, and access to specific staks can be limited. Users can vote positively or negatively on search results. HeyStaks has two types of ranking, primary promotions and secondary promotions, which affect the relevance of the recommendations.
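The two-tier promotion idea can be sketched as merging community recommendations ahead of the engine's organic results, primary before secondary, without duplicates. This is my own simplification of how HeyStaks presents promotions, not its actual ranking model:

```python
def merged_results(organic, primary, secondary):
    """Merge promoted results into an organic result list:
    primary promotions first, then secondary, then the rest,
    dropping duplicates while preserving order."""
    out, seen = [], set()
    for url in primary + secondary + organic:
        if url not in seen:
            seen.add(url)
            out.append(url)
    return out

ranked = merged_results(
    organic=["a", "b", "c"],   # what the search engine returned
    primary=["c"],             # promoted from the user's current stak
    secondary=["b"],           # promoted from the user's other staks
)
```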
Discussion
Overall I believe HeyStaks is a great collaborative tool, although I probably would not ever use it except maybe when doing research. That brings me to a good point: it could genuinely benefit researchers, since the whole idea is to help other users who search for similar topics. Some of the capabilities of HeyStaks seem similar to the bookmarks that already exist in web browsers; the added benefit is that you can share your staks with other members, and they can also contribute to different search communities.
Thursday, April 7, 2011
Paper Reading # 20
Comments
Comment1
Comment2
Reference
Lowering the barriers to website testing with CoTester
Jalal Mahmud, Tessa Lau
IUI'10, February 7-10, 2010, Hong Kong, China
Summary
In this paper, Mahmud and Lau introduce a system called CoTester whose primary goal is to lower the difficulty of testing web applications. The authors point out that current web testing tools require programming knowledge and ongoing maintenance, which tends to be time-consuming. CoTester is built on the CoScripter platform; CoScripter is a scripting language that supports assertions and does not require advanced programming skills to use. CoTester uses subroutines within a test to improve test management and also tries to give the tester a visual structure of the changes being made. With these subroutines, a change can be propagated automatically to similar instances of a process, for example changing the login process of an application. To make testing easier, the system relies on a machine learning algorithm that identifies subroutines in test scripts; Mahmud and Lau claim it recognizes subroutines with 91% accuracy over a set of seven subroutine types. Test scripts are recorded from the tester's actions in a web application, and a recording can include objects being present or absent, links, the presence of text, and buttons clicked.
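To give a feel for subroutine identification (the paper uses a trained model; this is only a naive frequency-based stand-in with made-up step names), one could look for step sequences that recur across recorded scripts:

```python
from collections import Counter

def candidate_subroutines(scripts, min_len=2, min_count=2):
    """Find step sequences that recur across recorded test scripts --
    candidates to factor out as shared subroutines."""
    counts = Counter()
    for steps in scripts:
        for n in range(min_len, len(steps) + 1):
            for i in range(len(steps) - n + 1):
                counts[tuple(steps[i:i + n])] += 1
    return [seq for seq, c in counts.items() if c >= min_count]

# Two hypothetical recorded scripts sharing a login sequence.
s1 = ["open site", "enter username", "enter password", "click login", "check inbox"]
s2 = ["open site", "enter username", "enter password", "click login", "compose mail"]

candidates = candidate_subroutines([s1, s2])
```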
Discussion
The last two papers I read have dealt with website research. I found this paper extremely boring, and although there are some interesting aspects, such as the script subroutines, I do not agree with the authors' ideas. First of all, if someone is going to develop web applications, I would expect that person to have advanced knowledge of computer programming. This system implements a scripting language that can be used by people who do not necessarily have programming knowledge, aiming toward programming by demonstration, which I do not disagree with. However, I still think someone with programming skills should be the one doing this. The accuracy of their algorithm is pretty high and might be useful in other applications, but the main purpose of this paper seems to me irrelevant to future web technology.
Figure 3 from paper
Monday, April 4, 2011
Paper Reading # 19
Comments
Comment1
Comment2
Reference
WildThumb: A Web Browser Supporting Efficient Task Management on Wide Displays
Shenwei Liu, Keishi Tajima
IUI'10, February 7-10, 2010, Hong Kong, China
Summary
This paper talks about the advantages and disadvantages of current web browsers. Liu and Tajima claim that none of the current web browsers provide enough support for managing multiple windows or tabs. Since users are spending more time browsing the web, tabbed browsers are the predominant web browsers. According to a cited previous work, users with larger displays tend to open more tabs or windows. The research done by Liu and Tajima focuses specifically on wide displays. The main disadvantages of current web browsers are difficulty in page recognition, inefficient scanning of tabs, difficulty selecting tabs with pointing devices, and inefficient page organization. The main difficulty when working with many tabs is that as the number of open tabs increases, each tab becomes smaller and the site titles become indistinguishable. When many tabs are open, it takes longer to scan the list to find the desired tab. One of the main disadvantages of wide displays is that most sites leave unused empty side margins. The authors propose a system where this unused space is filled with augmented thumbnails to solve the disadvantages mentioned previously. These thumbnails are shown alongside the currently focused page and display the most relevant visited sites, as calculated by an algorithm that infers relevance from the browsing history.
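As a rough illustration of a history-based relevance score, here is one plausible reading of the idea in Python: each page is scored by its visits, with recent visits weighted more heavily. The exponential-decay weighting and the half-life value are my own assumptions, not the authors' actual algorithm.

```python
# Hedged sketch: rank visited pages for thumbnail slots by combining
# visit frequency with recency (exponentially decayed visit weights).
import math
import time

def relevance(visit_times, now=None, half_life=3600.0):
    """Score a page: each visit contributes a weight that decays with age."""
    now = time.time() if now is None else now
    return sum(math.exp(-(now - t) / half_life) for t in visit_times)

def top_thumbnails(history, k=3, now=None):
    """history: {url: [visit timestamps]} -> the k most relevant URLs."""
    return sorted(history, key=lambda u: relevance(history[u], now),
                  reverse=True)[:k]
```

With this scheme a page visited five times last week can still lose a thumbnail slot to a page visited twice in the last hour, which matches the intuition that the thumbnails should reflect the user's current tasks.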
Discussion
I personally do not own a wide display computer, but my co-worker does and I can say he does open many tabs at once. I always ask him how he can keep up with so many, but he seems to perform well the way he browses the web. I think the augmented thumbnails are a great idea, but I do not know how accurate they could be. If there are two pages from the same site with no image, it would be extremely hard to differentiate.
Wednesday, March 30, 2011
Paper Reading # 18
Comments
Comment1
Comment2
Reference
Embedded Media Markers: Marks on Paper that Signify Associated Media
Qiong Liu, Chunyuan Liao, Lynn Wilcox, Anthony Dunnigan, Bee Liew
IUI'10, February 7-10, 2010, Hong Kong, China
Summary
This paper talks about embedded media markers (EMMs), which are marks on printed documents that relate certain text areas to associated media. EMMs serve a similar purpose as bar codes and hyperlinks. The advantage EMMs have is that, unlike bar codes, they do not affect the appearance of the document; and unlike hyperlinks, they are printed on a paper document. The authors claim that paper is the most widely used medium for viewing information. It has many advantages such as portability, low cost, and high resolution. The disadvantage is that it cannot play video or audio; however, cell phones are great for this purpose. The design proposed in this paper merges the use of paper and cell phones. The idea is to print iconic marks on paper documents corresponding to media, which can then be captured by a cell phone. The purpose of the cell phone is to retrieve information about the EMMs and play the associated media. Some of the previous work includes Microsoft Tag, DataGlyphs, RFID, HotPaper, and Mobile Retriever, among many other applications. EMMs can be printed on low-resolution paper at relatively low cost. EMMs should be visible to humans, should be meaningful, and should not affect the layout of the document. To achieve this, there should be different icons to distinguish between the types of associated media.
Discussion
The idea presented in this paper is awesome! There is only one thing that worries me, and that is that we are becoming very lazy. As we continue to develop new technology, we also become lazier. How awesome would it be to be reading an essay, take out your phone, scan a text area and retrieve the definition of a certain word you don't understand? This is the idea of the embedded media markers. The neat thing about EMMs is that they are low cost, and they do not change the document layout like bar codes do. This would be a neat thing to add in academia!
Tuesday, March 29, 2011
Paper Reading # 16
Comments
Comment1
Comment2
Reference
A Practical Pressure Sensitive Computer Keyboard
Paul H. Dietz, Benjamin Eidelson, Jonathan Westhues and Steven Batiche
UIST 09 October 4-7, 2009 Victoria, British Columbia, Canada
Summary
This paper talks about a pressure-sensitive computer keyboard. The idea is to design a pressure keyboard that can be a little more expensive to mass-produce, but not to the extent where it would be impossible to sell. In the introduction they explain that even though computers have changed dramatically since their start, keyboards have not changed much. Prior work with pressure keyboards has been used for electronic music, improving text generation for disabled people, and biometric user authentication. All of the uses mentioned before are not suitable for mass production due to their cost. The design proposed in this paper uses piezoresistive material and carbon screen-printed ink instead of the current flexible-membrane technology. This design allows for power saving, as it only uses power when a key has been pressed. The keyboard works as a matrix of resistors, with each key connected to a unique row-column pair. The authors claim this design overcomes the problem of "ghosting". Some of the applications that are possible with this type of keyboard are gaming, instant messaging, and general typing.
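To make the row-column matrix idea concrete, here is a small Python sketch of a matrix scan: drive the rows one at a time and read an analog pressure value at each column. The `read_adc` function is a hypothetical stand-in for the keyboard's analog front end, and the threshold is an invented value.

```python
# Hedged sketch of scanning a resistive key matrix. Because every key sits
# at its own unique row-column pair, each pressed key is read directly,
# which is how a matrix like this can avoid "ghosting" false keypresses.
def scan_matrix(read_adc, rows, cols, threshold=0.05):
    """Return {(row, col): pressure} for every key pressed beyond threshold."""
    pressed = {}
    for r in range(rows):
        for c in range(cols):
            p = read_adc(r, c)  # normalized pressure at this key's resistor
            if p > threshold:
                pressed[(r, c)] = p
    return pressed

# Simulated ADC: the key at row 0, column 1 is pressed hard.
readings = {(0, 1): 0.8}
pressed = scan_matrix(lambda r, c: readings.get((r, c), 0.0), rows=4, cols=4)
```

On real hardware the scan would be done by microcontroller firmware, but the logic, one resistor per key read through its row-column address, is the same.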
Discussion
Personally, I had never wondered how a keyboard works, and this paper definitely explains how modern keyboards work. What was interesting about reading this paper is questioning why we have not developed new keyboard technology. I believe there are many applications that can benefit from pressure-sensing keyboards. The idea explained in this paper for general typing could be very beneficial: a slight tap on the backspace key deletes a letter, while a hard tap deletes a whole word.
Paper Reading # 17
Comments
Comment 1
Comment 2
Reference
Estimating User's Engagement from Eye-gaze Behaviors in Human-Agent Conversations
IUI'10 February 7-10,2010 Hong Kong, China
Summary
This paper talks about different strategies to see if a person is engaged in face-to-face conversations, and how people interact with each other through the use of technology. The main goal of the system described in this paper is to determine if the user is fully engaged in a conversation. The most complicated issue in their research is that the system must perceive nonverbal behavior such as facial gestures, body movements, and other key movements that could indicate a person is not fully engaged in a conversation. According to their research there are many eye-tracking systems that are very stable and can therefore be used in their complex system. The system, or the agent as the authors refer to it, must be capable of reengaging a person who seems to have lost full engagement in a conversation. The agent does this by changing the topic when eye-gaze behaviors suggest the user is bored during a dialogue. To analyze their system, the authors describe a Wizard-of-Oz approach to collect data. The scenario of the experiment is a salesperson in a mobile phone store. The experiment is set up such that one person, called the user, is located in a room looking at a screen where the salesperson is projected. Outside the room there is another person, called the observer, who can watch the user through a one-way window. Both individuals have a button to press, with different instructions: the user presses the button when he finds the salesperson's description boring, and the observer presses the button when he believes the user is bored by the salesperson's description. The analysis used in this experiment is called 3-grams. The results show that selecting the agent's behaviors according to engagement estimation turned out to be effective in this agent-human interaction.
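As I understand the 3-gram analysis, it slides a window of three over a sequence of coded behavior events and counts the resulting patterns. Here is a minimal Python sketch of that step; the event labels are illustrative, not the paper's actual coding scheme.

```python
# Sketch of 3-gram extraction: count every run of three consecutive
# gaze/behavior events, so that recurring patterns (e.g. repeatedly
# looking away from the agent) become visible as frequent triples.
from collections import Counter

def three_grams(events):
    """Count all consecutive triples in an event sequence."""
    return Counter(tuple(events[i:i + 3]) for i in range(len(events) - 2))

grams = three_grams(["gaze_agent", "gaze_away", "gaze_away",
                     "gaze_agent", "gaze_away", "gaze_away"])
```

A pattern like ("gaze_agent", "gaze_away", "gaze_away") appearing often would then be one plausible cue for the agent that engagement is dropping.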
Discussion
This paper is really interesting because it could be implemented in academia. It would be extremely beneficial for a teacher to know when their audience is not being engaged by his/her lectures. The Wizard-of-Oz approach in this interaction showed great results in the agent-human interaction. This type of research can have great benefits for people who are always presenting, and can provide feedback to improve their presentation skills.
Tuesday, March 8, 2011
Paper Reading # 14
Reference:
PhotoelasticTouch: Transparent Rubbery Tangible Interface on an LCD and Photoelasticity
October 4-7, 2009 Victoria, British Columbia, Canada
Toshiki Sato, Haruko Mamiya, Hideki Koike, Kentaro Fukuchi
Summary
This paper talks about a system called PhotoelasticTouch, a tabletop system for touch-based interaction. Three applications are described in this paper: a touch panel, a tangible face application, and a paint application. The main concern these researchers have is that current touch-based systems have rigid surfaces and therefore lack tactile expressiveness. The paper proposes design elements such as no equipment that limits natural movement, a flexible surface, a surface that does not block the image of the display, and interactions including touching, pinching, kneading, or pulling. They also describe elements proposed by other researchers in their systems; most of these relied on IR cameras and lacked tactile feedback. PhotoelasticTouch is made up of an LCD, a high-speed camera, polarizing filters, and a transparent elastic body made of polyethylene or silicone rubber. PhotoelasticTouch works by capturing deformations of the elastic body: the camera detects linearly and circularly polarized light from the deformations. The changes in pressure allow PhotoelasticTouch to infer direction without the user having to slide a finger. The three applications described are the Pressure-Sensitive Touch Panel, Tangible Face, and Paint Application. The touch panel senses multiple touches and also their respective pressure values. With this application you can also rotate objects without sliding your finger. The tangible face is an application where you can deform a face image by applying different pressures with your fingers. In the paint application, the user applies more pressure to draw thicker lines and less pressure to draw thinner lines. In addition, the user can create elastic shapes such as stars, hearts, and circles to draw with.
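The paint application's pressure-to-thickness behavior can be sketched in a few lines of Python. The linear mapping and the width bounds below are my own assumptions, not values from the paper.

```python
# Minimal sketch of a pressure-driven brush: heavier pressure on the
# elastic surface yields a thicker stroke, lighter pressure a thinner one.
def stroke_width(pressure, min_w=1.0, max_w=12.0):
    """Map a normalized pressure in [0, 1] to a stroke width in pixels."""
    p = max(0.0, min(1.0, pressure))  # clamp out-of-range sensor values
    return min_w + p * (max_w - min_w)
```

A real implementation would read the pressure from the photoelastic image analysis, but the mapping from a continuous pressure value to a drawing parameter is the interesting part here.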
Discussion
It is interesting to see that people are researching surfaces that allow pressures to be identified, and give the user more options rather than the now classic rigid surfaces. Graduate students were asked to use the elastic surface and most of them seemed to like it. However, we have to look into what type of applications a soft surface can be beneficial for.
Thursday, February 24, 2011
Paper Reading #11 - Contact Area Interaction with Sliding Widgets
Comments
Stephen Morrow
Miguel Cardenas
Reference
Contact Area Interaction with Sliding Widgets
Tomer Moscovich
UIST October 4-7,2009 Victoria, British Columbia, Canada
Summary
At this point, it is evident that touchscreen systems have started replacing classic cursor-based systems; a concrete example is the design of cell phones, now called smartphones. This paper talks about a proposed design for touchscreen widgets that aims at solving some of the problems users encounter with this shift to the touchscreen era. Current touchscreen widgets were developed for mouse- or cursor-based systems, and therefore users have encountered multiple problems with their input interactions. One of the main issues with touchscreen systems is the problem known as the fat finger problem, as well as the selection of multiple objects with a single touch. Moscovich presents an innovative idea to solve these problems. He claims the main problem is that current systems are designed around the one-pixel selection point model. The proposed solution is to use an interaction based on area selection and sliding, which can resolve the ambiguity of which target the user wants to select. No hardware update is needed to change this interaction. According to Moscovich, approximating the contact area with a circle instead of a one-pixel selection point works much better.
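To show what circle-based selection buys over one-pixel hit-testing, here is a simplified Python sketch of my own: the finger contact is modeled as a circle, and each rectangular widget is scored by the distance from the circle's center to its nearest point, so a touch that misses a target's exact pixels can still select it.

```python
# Hedged sketch of area-based target selection: instead of testing a
# single pixel, find the widget closest to the contact circle's center
# within the circle's radius. Widget layout below is an invented example.
import math

def pick_target(cx, cy, radius, widgets):
    """widgets: {name: (x0, y0, x1, y1)} -> best target in reach, or None."""
    best, best_d = None, radius
    for name, (x0, y0, x1, y1) in widgets.items():
        nx = min(max(cx, x0), x1)   # nearest point on the rectangle
        ny = min(max(cy, y0), y1)
        d = math.hypot(cx - nx, cy - ny)
        if d <= best_d:
            best, best_d = name, d
    return best
```

With a one-pixel model, a touch at (12, 5) would select nothing; with a 5-pixel contact radius it still reaches the nearby button, which is the ambiguity-resolving behavior the paper argues for.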
Discussion
This paper was very interesting to me because the last paper reading dealt with a similar issue, which is the fat finger problem. Although in this paper, the idea for fixing that problem is much more concise. I have a Samsung Fascinate smart phone that has an Android operating system, and it is interesting to see that some of the solutions proposed in this paper are actually present in my phone. The sliding mechanism is used in my phone for incoming calls, and when I want to unlock my screen. Pictures are shown to the right to depict this.
Tuesday, February 22, 2011
Paper Reading # 10 - Ripples: Utilizing Per-Contact Visualization to Improve User Interaction with Touch Displays
Reference
Daniel Wigdor, Sarah Williams, Michael Cronin, Robert Levy, Katie White, Maxim Mazeev, Hrvoje Benko.
UIST October 4-7, 2009, Victoria, British Columbia, Canada.
Summary:
This paper presents a system called Ripples which deals with visualizations in direct-touch display systems. The main purpose of Ripples is to help with the feedback problem in touch displays. Even though this is a known issue, not much attention has been devoted to this type of research. The authors believe the so-called "fat finger problem" creates frustration in users, since they lack a way to attribute unexpected results to their actual causes. One of the main problems with the fat finger is missing targets. Another key issue is that depending on the hardware of the application, a user may not feel any feedback on a click.
Discussion:
With systems moving quite fast to touch displays, I believe this area of research has an enormous field of study. It will be interesting to see what kind of discoveries will be made that can help us develop better applications for touchscreen TVs, smartphones, tablets, computers, etc. In particular, the focus of this paper is interesting because when a click does not perform the action it is supposed to, the user typically gets frustrated. Therefore, a feedback mechanism that lets you know whether an action has been taken sounds great for debugging known issues.
Thursday, February 17, 2011
Paper Reading # 9 - VizWiz: Nearly Real-time Answers to Visual Questions
Reference
Jeffrey P. Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller.
UIST'10 October 3-6, 2010, New York City, USA.
Summary
This paper describes some of the barriers that blind people face and some possible solutions to aid them in everyday activities. Current technology to aid blind people is error-prone and most of the time also extremely expensive. The solution described in this paper is a mobile application that offers nearly real-time responses to questions blind people may have. The application is called VizWiz and is available for the iPhone platform. In order to make VizWiz's answers approximate real time, the quikTurkit approach aims at recruiting workers so they are available as soon as questions arrive. One of the main issues described by this paper is the lack of access to visual information, such as nutrition information written on a can. Another problem with existing applications is that an automated response might not answer questions such as "What is the cheapest hamburger on the menu?"; the answer could be limited to listing the prices of every single item on the menu. The emphasis of VizWiz is that human workers are generally a better support tool than automated responses because they can apply common-sense intelligence that a software program does not have. At the beginning of this research, a survey was conducted to get feedback from blind people on VizWiz.
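The pre-recruiting idea behind quikTurkit can be sketched with a simple worker pool in Python. This is my own simplification of the concept described above, not the actual quikTurkit implementation; the class and method names are invented.

```python
# Rough sketch: keep a pool of workers recruited ahead of time, so an
# incoming question can be handed to an idle worker with minimal wait
# instead of recruiting from scratch after the question arrives.
from collections import deque

class WorkerPool:
    def __init__(self, target_size=3):
        self.target_size = target_size
        self.idle = deque()  # workers waiting for a question, FIFO

    def recruits_needed(self):
        """How many more workers to recruit to stay at the target size."""
        return max(0, self.target_size - len(self.idle))

    def add_worker(self, worker):
        self.idle.append(worker)

    def assign(self, question):
        """Pair the longest-waiting idle worker with a question, or None."""
        return (self.idle.popleft(), question) if self.idle else None
```

The key point the sketch captures is that recruiting happens continuously to keep the pool full, so the latency a blind user sees is the worker's answering time, not the recruiting time.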
Discussion
The approach of this application can be very handy for blind people. I personally do not have any relatives or friends with this condition, but if I imagine myself closing my eyes and asking questions, human support would be my preference. The idea of this application providing near real-time feedback is also extremely important in aiding blind people. Even though at the beginning of the paper the authors mention that current applications are error-prone, VizWiz could also fail at any given time. No software application is perfect, but this seems to be heading in the right direction.
Jeffrey P. Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller.
UIST'10 Octobe 3-6, 2010, New York City, USA.
Summary
This paper describes some of the barriers that blind people face and some possible solutions for aid in every day activities. Current technology to aid blind people is error-prone and most of the time is also extremely expensive. The solution described in this paper is a mobile application that offers nearly real-time responses to any questions blind people may have. The application is called Viz Wiz and is available for the iPhone platform. In order to make VizWiz's answers to approximate real time, the quikTurkit approach aims at recruiting workers to be available as soon as the questions arrive. One of the main issues described by this paper is the lack of access to virtual information such as nutrition information written on a can. Another problem with existing applications is that an automated response might not answers questions such as "What is the cheapest hamburger in the menu?". The answer could be limited to letting you know the prices of every single item on the menu. The emphasis on VizWiz is that human workers are generally a better support tool than automated messages because they can use intelligence on common sense issues that a software program does not have. At the beginning of this research a survey was constructed to get feedback from blind people on VizWiz.
Discussion
The approach of this application can be very handy for blind people. I personally do not have any relatives or friends with this disability, but if I imagine myself closing my eyes and asking questions, human support would be my preference. The application's near real-time feedback is also extremely important in aiding blind people. Even though the authors mention at the beginning of the paper that current applications are error-prone, VizWiz could also fail at any given time. No software application is perfect, but this one seems to be heading in the right direction.
Thursday, February 10, 2011
Paper Reading # 7 - Grassroots Heritage in the Crisis Context: A Social Media Probes Approach to Studying Heritage in a Participatory Age
Reference
Sophia B. Lu
CHI 2010, April 10-15, 2010, Atlanta, GA
Summary
This paper talks about how social media technologies have taken on a bigger role in understanding cultural heritage. Historic events produce memories that are valuable to preserve and share with future generations. As technology evolves, people are finding new ways to capture and share memories of everyday life. The author points out that she focuses on social and cultural significance. Lu uses an HCI design she calls "social media probes" to improve the engagement of participants.
Discussion
This paper is probably the most boring paper I have read in this class. Throughout the paper, the author seems to keep returning to the same points. Lu points out that the way people capture memories through social media affects heritage, and she proposes ways to improve this. As I was reading, I kept telling myself, "Oh yeah, Facebook is already capable of that." Then I thought that maybe her research predated Facebook's explosion in popularity, but no, her research took place amid Facebook's growing popularity.
Tuesday, February 8, 2011
Paper Reading # 6 - Critical Point, A Composition for Cello and Computer
Reference
Roger Dannenberg, Tomas Laurenzo
CHI 2010 April 10-15, 2010, Atlanta, Georgia.
Summary
This paper describes a piece of software called Critical Point, which is used for real-time music composition, in this case with a cello performer. The work intends to provide cello performers with sound extensions: the program lets the performer control the textures of the sound, and its algorithms allow for delays and pitch shifts. In addition to Critical Point, the performance also includes animations and videos.
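As an illustration of what a real-time delay algorithm might look like, a feedback delay line mixes a delayed, attenuated copy of the signal back into itself. This is my own minimal sketch of the general technique, not the authors' actual implementation; the function and parameter names are hypothetical.

```python
import numpy as np

def feedback_delay(signal, delay_samples, feedback=0.5, mix=1.0):
    """Naive feedback delay line of the kind a live-processing system
    might apply to a cello signal (illustrative sketch only).

    Each output sample adds back a scaled copy of the output from
    `delay_samples` earlier, producing decaying echoes."""
    out = np.asarray(signal, dtype=float).copy()
    for n in range(delay_samples, len(out)):
        out[n] += mix * feedback * out[n - delay_samples]
    return out
```

Feeding an impulse through this filter produces echoes at multiples of the delay, each attenuated by the feedback factor, which is the characteristic sound of a simple digital delay.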
Discussion
Even though I have an appreciation for music, I have never practiced playing an instrument. This paper is interesting because I was able to learn a few things that non-musicians take for granted when listening to music. Some of these key issues include the limited range of sounds each instrument can create, the quality of the sound, and its textures.
Tuesday, February 1, 2011
Paper Reading # 5 - Exploring the Design Space in Technology - Augmented Dance
Reference
Celine Latulipe, David Wilson, Sybil Huskey, Melissa Word, Arthur Carroll, Erin Carroll, Berto Gonzalez, Vikash Singh, Mike Wirth, Danielle Lottridge
CHI 2010 April 10-15, 2010, Atlanta, GA
Summary
This paper describes a project called Dance.Draw that focuses on research to integrate dance performances with technology. The main goal of the project is to enhance the audience's interaction with the dance and its visualizations. The paper claims that this type of technology integration is not new, and several other computer scientists have studied this interaction. The Dance.Draw project began with a performance in January 2008 in which the performers danced with gyroscopic mice held in their hands. After the showing, the team quickly learned that holding the mice prevented the performers from doing any movement that required hand support. Later that year, the next performance experimented with only a subset of the performers holding the mice, and included noticeable choreographic movements when the performers passed the mice to each other. This second performance produced great feedback about the audience's interaction with the visualizations. The most recent performance was staged at CHI 2010, where the performers used wireless sensors to drive the visualizations.
Discussion
This is the second article I have read that involves research by Celine Latulipe. It is very interesting to see how she tries to integrate technology with some type of art; to be more specific, in the two papers she deals with digital images and dance performance. I like the fact that she emphasizes that even though technology is involved, it cannot be the centerpiece of the art exhibition. I also liked this paper because it shows that progress has been made in this dance-technology interaction, which leads me to believe that there could actually be a real future for these kinds of performances.
Friday, January 28, 2011
Paper Reading # 4 - Layered Surveillance
Reference
Celine Latulipe & Annabel Manning
CHI 2010, April 10–15, 2010, Atlanta, Georgia, USA.
Summary
This paper describes the use of art and technology to explore U.S.-Mexico border crossings and surveillance. Manning describes an interactive way to present her artwork with the help of technology; the purpose of the technology is to give viewers some control over the artwork. The two techniques described are the interactive lenses work and the interactive layers work. The interactive lenses work portrays static images over which participants move particular lenses using wireless mice. With the interactive layers technique, participants control various aspects of the artwork, such as brightness and level of detail. Manning is an artist who depicts Latino immigrants under surveillance on the U.S.-Mexico border; she provides the digital photos and videos used in the interaction, and Latulipe provides the software that creates the moving videos.
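The interactive layers idea can be sketched as a weighted blend of image layers followed by a brightness scale. This is a hypothetical illustration of the general technique; the function and parameter names are my own, not from the paper.

```python
import numpy as np

def compose_layers(layers, alphas, brightness=1.0):
    """Blend image layers with per-layer opacity, then scale overall
    brightness and clip to the valid 8-bit range. A minimal sketch of
    the kind of layer control viewers were given (illustrative only)."""
    out = np.zeros_like(layers[0], dtype=float)
    for layer, alpha in zip(layers, alphas):
        out += alpha * np.asarray(layer, dtype=float)
    return np.clip(out * brightness, 0.0, 255.0)
```

A viewer's input device could simply drive the `alphas` and `brightness` arguments in real time, re-blending the layers on each update.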
Discussion
I thought both of the interaction techniques depicted in this paper can make viewers appreciate the artwork more than a work hung on a wall. It is interesting how they use technology to increase the engagement of the viewers. What caught my attention was the fact that the artwork presented by Manning depicts surveillance of Hispanic immigrants on the U.S.-Mexico border. She believes that artists are important players in current immigration debates, and that is certainly a point of view that I do not agree with.