ACM BOOKS is a new series of high-quality books for the computer science community, published by ACM in collaboration with Morgan & Claypool Publishers.

The Handbook of Multimodal-Multisensor Interfaces: Foundations, User Modeling, and Common Modality Combinations - Volume 1
Sharon Oviatt, Björn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Krüger – 2017

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle systems, robots, and many other applications that are now highly competitive commercially.

This edited collection is written by international experts and pioneers in the field. It provides a textbook for students, and a reference and technology roadmap for professionals working in this rapidly emerging area.

Volume 1 of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling, interface design that supports user choice, synergistic combination of modalities with sensors, and blending of multimodal input and output. The volume also takes an in-depth look at the most common multimodal-multisensor combinations, for example, touch and pen input, haptic and non-speech audio output, and speech co-processed with visible lip movements, gaze, gestures, or pen input. A common theme throughout is support for mobility and individual differences among users, including the world's rapidly growing population of seniors.

These handbook chapters provide walk-through examples and video illustrations of different system designs and their interactive use. Common terms are defined, and information on practical resources (e.g., software tools, data resources) is provided for hands-on project work to develop and evaluate multimodal-multisensor systems. In the final chapter, experts exchange views on a timely and controversial challenge topic: how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.

Communities of Computing: Computer Science and Society in the ACM
Thomas J. Misa – 2016

“Communities of Computing” is the first book-length history of the Association for Computing Machinery (ACM), founded in 1947 and with a membership today of 100,000 worldwide. It profiles ACM's notable SIGs, active chapters, and individual members, placing ACM's history in a rich social and political context. The book's 12 core chapters are organized into three thematic sections. “Defining the Discipline” examines the 1960s and 1970s, when the field of computer science was taking form at the National Science Foundation, Stanford University, and through ACM's notable efforts in education and curriculum standards.

“Broadening the Profession” looks outward into the wider society as ACM engaged with social and political issues, and as members struggled to balance a focus on scientific issues with awareness of the wider world. Chapters examine the social turbulence surrounding the Vietnam War, debates about the women's movement, efforts for computing and community education, and international issues including professionalization and the Cold War.

“Expanding Research Frontiers” profiles three areas of research activity in which ACM members and ACM itself shaped notable advances in computing: computer graphics, computer security, and hypertext.

Featuring insightful profiles of notable ACM leaders, such as Edmund Berkeley, George Forsythe, Jean Sammet, Peter Denning, and Kelly Gotlieb, and honest assessments of controversial episodes, the volume deals with compelling and complex issues involving ACM and computing. It is not a narrow organizational history of ACM committees and SIGs, although much information about them is given. All chapters are original works of research. Many chapters draw on archival records of ACM's headquarters, ACM SIGs, and ACM leaders. This volume makes a permanent contribution to documenting the history of ACM and understanding its central role in the history of computing.

Text Data Management and Analysis: A Practical Introduction to Information Retrieval and Text Mining
ChengXiang Zhai, Sean Massung – 2016

Recent years have seen a dramatic growth of natural language text data, including web pages, news articles, scientific literature, emails, enterprise documents, and social media (such as blog articles, forum posts, product reviews, and tweets). This has led to an increasing demand for powerful software tools to help people manage and analyze vast amounts of text data effectively and efficiently. Unlike data generated by a computer system or sensors, text data are usually generated directly by humans and capture semantically rich content. As such, text data are especially valuable for discovering knowledge about human opinions and preferences, in addition to many other kinds of knowledge that we encode in text. In contrast to structured data, which conform to well-defined schemas and are thus relatively easy for computers to handle, text has less explicit structure, and a computer must process it further before the content it encodes can be understood. Current natural language processing technology has not yet reached the point where a computer can precisely understand natural language text, but a wide range of statistical and heuristic approaches to managing and analyzing text data have been developed over the past few decades. These approaches are usually robust and can be applied to text data in any natural language and about any topic.

This book provides a systematic introduction to many of these approaches, with an emphasis on the most useful knowledge and skills required to build a variety of practically useful text information systems. Because humans understand natural language far better than computers do, effective involvement of humans is generally needed, and text information systems often serve as intelligent assistants for humans. Depending on how a text information system collaborates with humans, we distinguish two kinds. The first is information retrieval systems, including search engines and recommender systems, which assist users in finding, from a large collection, the text data most relevant to a specific application problem, thus effectively turning big raw text data into much smaller relevant text data that can be more easily processed by humans. The second is text mining application systems, which assist users in analyzing patterns in text data to extract and discover actionable knowledge directly useful for task completion or decision making, thus providing more direct task support for users. The book covers the major concepts, techniques, and ideas in information retrieval and text data mining from a practical viewpoint, and it includes many hands-on exercises designed around a companion software toolkit (MeTA) that help readers learn to apply information retrieval and text mining techniques to real-world text data and to experiment with and improve the algorithms for interesting application tasks. It can be used as a textbook for computer science undergraduates and graduates and for library and information scientists, or as a reference book for practitioners working on problems in managing and analyzing text data.
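To make the retrieval idea concrete, here is a minimal, self-contained Python sketch of TF-IDF ranking, the kind of scoring search engines build on (an illustration of the general technique, not code from the book or from MeTA): documents are scored by how often they contain the query terms, discounted by how common each term is across the collection.

    import math
    from collections import Counter

    docs = [
        "the cat sat on the mat",
        "dogs and cats living together",
        "the quick brown fox",
    ]

    # Term frequencies per document, document frequency per term.
    tfs = [Counter(d.split()) for d in docs]
    df = Counter(term for tf in tfs for term in tf)
    N = len(docs)

    def score(query, tf):
        """TF-IDF relevance of one document to a whitespace-tokenized query."""
        return sum(tf[t] * math.log(N / df[t]) for t in query.split() if t in df)

    query = "cat mat"
    for i in sorted(range(N), key=lambda i: score(query, tfs[i]), reverse=True):
        print(round(score(query, tfs[i]), 3), docs[i])

Real systems add smoothing, length normalization, and inverted indexes for scale, but the ranking principle is the same.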

An Architecture for Fast and General Data Processing on Large Clusters
Matei Zaharia – 2016

Today, myriad data sources, from the Internet to business operations to scientific instruments, produce large and valuable data streams. However, the processing capabilities of single machines have not kept up with the size of data. As a result, organizations increasingly need to scale out these computations to clusters of hundreds of machines.

At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need not only to scale out traditional workloads but also to support these new applications.

This book, a revised version of the dissertation that won the 2014 ACM Doctoral Dissertation Award, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries, while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems support only simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing.

We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). We show that this is enough to capture a wide range of workloads. We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective.
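As a flavor of the programming model, here is a minimal PySpark sketch using Spark's public API (the pipeline itself is my illustration, not an excerpt from the book). The key point is that an RDD can be cached in memory and reused across passes, which is what makes multi-pass algorithms efficient:

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "rdd-sketch")

    # An RDD is a fault-tolerant, partitioned collection; cache() asks Spark
    # to keep it in memory so later passes reuse it instead of recomputing it.
    pairs = sc.parallelize(range(1000000)) \
              .map(lambda i: (i % 10, float(i))) \
              .cache()

    counts = pairs.countByKey()                                  # pass 1
    sums = pairs.reduceByKey(lambda a, b: a + b).collectAsMap()  # pass 2

    print({k: sums[k] / counts[k] for k in counts})              # per-key means
    sc.stop()

If a node fails, Spark recomputes only the lost partitions from the RDD's lineage, which is how the design keeps MapReduce's fault tolerance.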

This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, the text has been edited and reformatted, illustrations have been redrawn, and links have been added for the references.

Reactive Internet Programming: State Chart XML in Action
Franck Barbier – 2016

Is Internet software so different from “ordinary” software? This book practically answers this question through the presentation of a software design method based on the State Chart XML (SCXML) W3C standard along with Java. Web enterprise, Internet-of-Things, and Android applications, in particular, are seamlessly specified and implemented from “executable models.”

Internet software puts forward the idea of event-driven or reactive programming, as pointed out in Bonér et al.'s “Reactive Manifesto” (http://www.reactivemanifesto.org). It tells us that reactiveness is a must. However, beyond concepts, software engineers require effective means with which to put reactive programming into practice. This book's purpose is to outline and explain such means.
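As a toy illustration of the event-driven style (a dependency-free Python sketch of my own, not the book's SCXML tooling): a reactive component is a state machine that consumes a stream of events and reacts by transitioning, rather than blocking on calls. An SCXML document expresses the same structure declaratively.

    # Transition table of a tiny reactive component: (state, event) -> state.
    TRANSITIONS = {
        ("idle", "request"): "busy",
        ("busy", "done"):    "idle",
        ("busy", "error"):   "failed",
    }

    def run(events, state="idle"):
        """Consume an event stream, reacting to each event as it arrives."""
        for event in events:
            nxt = TRANSITIONS.get((state, event))
            if nxt is None:
                print(f"{state}: ignoring unhandled event {event!r}")
                continue
            print(f"{state} --{event}--> {nxt}")
            state = nxt
        return state

    run(["request", "done", "request", "error", "request"])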

The lack of professional examples in the literature that illustrate how reactive software should be shaped can be quite frustrating. Therefore, this book helps to fill in that gap by providing in-depth professional case studies that contain comprehensive details and meaningful alternatives. Furthermore, these case studies can be downloaded for further investigation.

Internet software requires a higher degree of adaptation, particularly at run time. After reading Reactive Internet Programming, the reader will therefore be ready to enter the forthcoming Internet era.

Verified Functional Programming in Agda
Aaron Stump – 2016

Agda is an advanced programming language based on Type Theory. Agda's type system is expressive enough to support full functional verification of programs, in two styles. In external verification, we write pure functional programs and then write proofs of properties about them. The proofs are separate, external artifacts, typically constructed by structural induction. In internal verification, we specify properties of programs through rich types for the programs themselves. This often necessitates including proofs inside code, to show the type checker that the specified properties hold. The power to prove properties of programs in these two styles is a profound addition to the practice of programming, giving programmers the ability to guarantee the absence of bugs, and thus to improve the quality of software more than was previously possible.
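For readers new to the distinction, here is the external style in miniature, sketched in Lean 4 rather than Agda purely for illustration (my example, not one from the book): the program is written first, and the proof about it is a separate artifact built by structural induction.

    -- The program: an ordinary recursive function.
    def double : Nat → Nat
      | 0     => 0
      | n + 1 => double n + 2

    -- The external proof: a separate artifact showing the program meets
    -- its specification, by structural induction on n.
    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      induction n with
      | zero      => simp [double]
      | succ n ih => simp only [double, ih]; omega

In the internal style, the specification would instead live in the type of double itself, and the proof obligations would be discharged inside its definition.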

Verified Functional Programming in Agda is the first book to provide a systematic exposition of external and internal verification in Agda, suitable for undergraduate students of Computer Science. No familiarity with functional programming or computer-checked proofs is presupposed.

The book begins with an introduction to functional programming through familiar examples like booleans, natural numbers, and lists, and techniques for external verification. Internal verification is considered through the examples of vectors, binary search trees, and Braun trees. More advanced material on type-level computation, explicit reasoning about termination, and normalization by evaluation is also included. The book also includes a medium-sized case study on Huffman encoding and decoding.

The VR Book: Human-Centered Design for Virtual Reality
Jason Jerald – 2015

Virtual reality (VR) can provide our minds with direct access to digital media in a way that seemingly has no limits. However, creating compelling VR experiences is an incredibly complex challenge. When VR is done well, the results are brilliant and pleasurable experiences that go beyond what we can do in the real world. When VR is done badly, not only is the system frustrating to use, but it can result in sickness. There are many causes of bad VR; some failures come from the limitations of technology, but many come from a lack of understanding of perception, interaction, design principles, and real users. This book discusses these issues by emphasizing the human element of VR. The fact is, if we do not get the human element correct, then no amount of technology will make VR anything more than an interesting tool confined to research laboratories. Even when VR principles are fully understood, a first implementation is rarely novel and almost never ideal, given the complex nature of VR and the countless possibilities that can be created. The VR principles discussed in this book will enable readers to intelligently experiment with the rules and iteratively design toward innovative experiences.

Ada's Legacy: Cultures of Computing from the Victorian to the Digital Age
Robin Hammerman, Andrew L. Russell – 2015

Ada's Legacy illustrates the depth and diversity of writers, thinkers, and makers who have been inspired by Ada Lovelace, the English mathematician and writer. The volume commemorates the bicentennial of Ada's birth in December 1815, celebrating Lovelace's many achievements as well as the impact of her life and work, which have reverberated widely since the late 19th century. In the 21st century we have seen a resurgence in Lovelace scholarship, thanks to the growth of interdisciplinary thinking and the expanding influence of women in science, technology, engineering, and mathematics. Ada's Legacy is a unique contribution to this scholarship. Here, the editors present work on topics that have not previously coexisted in print: Ada's collaboration with Charles Babbage; the development of the Ada programming language; Ada's position in the Victorian and Steampunk literary genres; her representation in, and inspiration of, contemporary art and comics; and her continued relevance in discussions of gender and technology in the digital age. With the 200th anniversary of Ada Lovelace's birth falling on December 10, 2015, the timing is perfect to publish this collection. Because of its broad focus on subjects that reach far beyond the life and work of Ada herself, Ada's Legacy will appeal to readers who are curious about Ada's enduring importance in computing and the wider world.

The idea for this book originated from the first academic conference of its kind, Ada Lovelace: An Interdisciplinary Conference Celebrating her Achievements and Legacy, which took place at Stevens Institute of Technology (Hoboken, New Jersey) on October 18, 2013, under the auspices of the Institute's College of Arts and Letters. The time was right to bring this conference to Stevens, which was a male-only school until 1971. By 1982, Stevens had become the first major institute in the United States to implement a personal computer requirement for its students. Around this time, a pioneering technology project resulted in the networking of the entire Stevens campus, creating one of the nation's first intranets. Additionally, the recent development of undergraduate programs in the College of Arts and Letters, including Gender Studies as well as Science and Technology Studies, which strongly anchor women in STEM, made Stevens well positioned to host a conference celebrating Lovelace's achievements and legacy. The conference brought together international scholars from across the disciplines to coincide with the week celebrating Ada Lovelace Day. Tremendous interest circulated in advance of the conference, particularly among computing history specialists and 19th-century literary scholars, and it escalated immediately following the proceedings. Tom Misa, director of the Charles Babbage Institute and a featured speaker at the conference, was chief among those who caught this interest. He envisioned a book project that would develop a concrete examination of ideas inspired by the proceedings. The conference organizer, Robin Hammerman, and Andrew Russell, director of the College of Arts and Letters Science and Technology Studies program, enthusiastically agreed to collaborate as editors for this interdisciplinary collection.

Many of the papers published in this volume first were presented at the conference in October 2013. We are pleased to acknowledge the supporters of that conference: Dr. Lisa Dolling, former Dean of the College of Arts & Letters at Stevens Institute of Technology, and Dr. George Korfiatis, Provost of Stevens Institute of Technology.

The editors are grateful to Thomas J. Misa, Series Editor for ACM's History of Computing, for skillfully overseeing the development of this project from its inception to completion. We also are happy to thank Diane Cerra, Executive Editor at Morgan & Claypool Publishers, for her kind and capable attention to the production of this volume. And we are delighted that Sydney Padua graciously agreed to create original illustrations for Ada's Legacy.

Edmund Berkeley and the Social Responsibility of Computer Professionals
Bernadette Longo – 2015

Edmund C. Berkeley (1909–1988) was a mathematician, insurance actuary, inventor, publisher, and a founder of the Association for Computing Machinery (ACM). His book “Giant Brains, or Machines That Think” (1949) was the first explanation of computers for a general readership. His journal “Computers and Automation” (1951–1973) was the first journal for computer professionals. In the 1950s, Berkeley developed mail-order kits for small personal computers such as Simple Simon and the Brainiac. In an era when computer development was on a scale barely affordable by universities or government agencies, Berkeley took a different approach and sold simple computer kits to average Americans. He believed that digital computers, using mechanized reasoning based on symbolic logic, could help people make more rational decisions. The result of this improved reasoning would be better social conditions and fewer large-scale wars. Although Berkeley's populist notions of computer development in the public interest did not prevail, the events of his life exemplify the human side of ongoing debates concerning the social responsibility of computer professionals. This biography of Edmund Berkeley, based on primary sources gathered over 15 years of archival research, provides a lens through which to understand the social and political decisions surrounding early computer development, and the consequences of those decisions in our 21st-century lives.

Candidate Multilinear Maps
Sanjam Garg – 2015

Cryptography seems like “black magic,” enabling tasks that often seem paradoxical or simply impossible. Like space explorers, cryptographers often wonder, “What are the boundaries of this world of ‘black magic’?” This work lays one of the foundation stones in furthering our understanding of those boundaries.

In it, Garg describes plausible lattice-based constructions with properties that approximate the sought-after multilinear maps in hard-discrete-logarithm groups. The security of his constructions relies on seemingly hard problems in ideal lattices, which can be viewed as extensions of the assumed hardness of the NTRU function.
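For orientation, this is the ideal object being approximated (a standard definition paraphrased here, not quoted from the book): a κ-multilinear map generalizes the bilinear pairings (κ = 2) already in wide cryptographic use.

    % An ideal kappa-multilinear map on a group G with target group G_T:
    \[
      e : \underbrace{G \times \cdots \times G}_{\kappa\ \text{times}} \longrightarrow G_T,
      \qquad
      e\big(g^{a_1}, g^{a_2}, \ldots, g^{a_\kappa}\big)
        = e(g, g, \ldots, g)^{a_1 a_2 \cdots a_\kappa},
    \]
    % with the requirement that discrete logarithms remain hard in G and G_T.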

These new constructions radically enhance the cryptographic toolset and open a floodgate of applications, of which the book presents a survey.

This book is based on Sanjam Garg's PhD thesis, an extended version of the paper “Candidate Multilinear Maps from Ideal Lattices,” co-authored with Craig Gentry and Shai Halevi. The thesis won the 2013 ACM Doctoral Dissertation Award.

Smarter Than Their Machines: Oral Histories of Pioneers in Interactive Computing
John Cullinane – 2014

Smarter Than Their Machines: Oral Histories of Pioneers in Interactive Computing is based on oral histories archived at the Charles Babbage Institute, University of Minnesota. Included are the oral histories of key pioneers of the computer industry, selected by John Cullinane for their role in the path to interactive computing, such as Richard Bloch, Gene Amdahl, Herbert W. Robinson, Sam Wyly, J.C.R. Licklider, Ivan Sutherland, Larry Roberts, Robert Kahn, Marvin Minsky, Michael Dertouzos, and Joseph Traub, as well as his own. John has woven them together with introductions that are, in essence, a personal walk down the computer industry road. He had the unique advantage of having been part of, or witness to, much of the history contained in these oral histories, beginning as a co-op student at Arthur D. Little, Inc., in the 1950s. Eventually, he became a pioneer in his own right by creating the computer industry's first successful software products company (Cullinane Corporation). An added benefit of reading these oral histories is that they contain important messages for today's leaders at all levels, including the lesson that government, industry, and academia can accomplish great things when they work together effectively. This is how the computer industry was created, which in turn led to the Internet, both totally unanticipated just 75 years ago.

A Framework for Scientific Discovery through Video Games
Seth Cooper – 2014

As science becomes increasingly computational, the limits of what is computationally tractable become a barrier to scientific progress. Many scientific problems, however, are amenable to human problem-solving skills that complement computational power. By leveraging these skills on a larger scale, beyond the relatively few individuals currently engaged in scientific inquiry, there is the potential for new scientific discoveries.

This book presents a framework for mapping open scientific problems into video games. The game framework combines computational power with human problem solving and creativity to work toward solving scientific problems that neither computers nor humans could previously solve alone. To maximize the pool of potential contributors to scientific discovery, the framework designs games to be played by people with no formal scientific background and incentivizes long-term engagement with a myriad of collaborative and competitive reward structures. The framework also allows players and the game to continually coevolve: as players gain expertise through gameplay, the game changes to become a better tool.

The framework is validated by being applied to proteomics problems with the video game Foldit. Foldit players have contributed to novel discoveries in protein structure prediction, protein design, and protein structure refinement algorithms. The coevolution of human problem solving and computer tools in an incentivized game framework is an exciting new scientific pathway that can lead to discoveries currently unreachable by other methods.

Embracing Interference in Wireless Systems
Shyamnath Gollakota – 2014

The wireless medium is a shared resource. If nearby devices transmit at the same time, their signals interfere, resulting in a collision. In traditional networks, collisions cause the loss of the transmitted information. For this reason, wireless networks have been designed with the assumption that interference is intrinsically harmful and must be avoided.

This book, a revised version of the author's award-winning Ph.D. dissertation, takes an alternate approach: instead of viewing interference as an inherently counterproductive phenomenon that should be avoided, we design practical systems that transform interference into a harmless, and even beneficial, phenomenon. To achieve this goal, we consider how wireless signals interact when they interfere, and we use this understanding in our system designs. Specifically, when interference occurs, the signals are mixed on the wireless medium. By understanding the parameters of this mixing, we can invert it and decode the interfered packets, thus making interference harmless. Furthermore, we can control the mixing process to create strategic interference that allows decodability at a particular receiver of interest but prevents decodability at unintended receivers and adversaries. Hence, we can transform interference into a beneficial phenomenon that provides security.
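A toy numerical sketch of the "invert the mixing" step (my illustration, not the book's systems; it assumes the channel coefficients have been estimated and one colliding packet has already been decoded, as in successive interference cancellation):

    import numpy as np

    rng = np.random.default_rng(1)

    # Two BPSK packets collide; each arrives through its own (known) channel.
    h1, h2 = 0.9 + 0.2j, 0.4 - 0.7j        # channel coefficients (estimated)
    x1 = rng.choice([-1.0, 1.0], size=64)  # packet 1 symbols
    x2 = rng.choice([-1.0, 1.0], size=64)  # packet 2 symbols
    noise = 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))

    y = h1 * x1 + h2 * x2 + noise          # the collision on the medium

    # Suppose packet 1 is already known (e.g., decoded from an
    # interference-free chunk of a retransmission). Re-modulate it,
    # subtract its contribution, and decode packet 2 from the residue.
    residue = y - h1 * x1
    x2_hat = np.sign((residue / h2).real)

    print("packet 2 recovered:", np.array_equal(x2_hat, x2))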

Building on this approach, we make four main contributions: We present the first WiFi receiver that can successfully reconstruct the transmitted information in the presence of packet collisions. Next, we introduce a WiFi receiver design that can decode in the presence of high-power cross-technology interference from devices like baby monitors, cordless phones, microwave ovens, or even unknown technologies. We then show how we can harness interference to improve security. In particular, we develop the first system that secures an insecure medical implant without any modification to the implant itself. Finally, we present a solution that establishes secure connections between any two WiFi devices, without having users enter passwords or use pre-shared secret keys.

Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers
Bryan Jeffery Parno – 2014

As society rushes to digitize sensitive information and services, it is imperative that we adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. After all, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low cost. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted [Karger et al. 1991, Gold et al. 1984, Ames 1981].

In this book, a revised version of my doctoral dissertation, originally written while studying at Carnegie Mellon University, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. We support this premise over the course of the following chapters:

  • Introduction. This chapter introduces the notion of bootstrapping trust from one device or service to another and gives an overview of how the subsequent chapters fit together.
  • Background and related work. This chapter focuses on existing techniques for bootstrapping trust in commodity computers, specifically by conveying information about a computer's current execution environment to an interested party. This would, for example, enable a user to verify that her computer is free of malware, or that a remote web server will handle her data responsibly. (A toy sketch of the hash-chain measurement underlying such attestation appears after this list.)
  • Bootstrapping trust in a commodity computer. At a high level, this chapter develops techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. While the problem is simply stated, finding a solution that is both secure and usable with existing hardware proves quite difficult.
  • On-demand secure code execution. Rather than entrusting a user's data to the mountain of buggy code likely running on her computer, in this chapter we construct an on-demand secure execution environment that can perform security-sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.
  • Using trustworthy host data in the network. Having established an environment for secure code execution on an individual computer, this chapter shows how to extend trust in this environment to network elements in a secure and efficient manner. This allows us to reexamine the design of network protocols and defenses, since we can now execute code on end hosts and trust the results within the network.
  • Secure code execution on untrusted hardware. Lastly, this chapter extends the user's trust one more step to encompass computations performed on a remote host (e.g., in the cloud). We design, analyze, and prove secure a protocol that allows a user to outsource arbitrary computations to commodity computers run by an untrusted remote party (or parties) who may subject the computers to both software and hardware attacks. Our protocol guarantees that the user can both verify that the results returned are indeed the correct results of the specified computations on the inputs provided and protect the secrecy of the computations' inputs and outputs. These guarantees are provided in a non-interactive, asymptotically optimal (with respect to CPU and bandwidth) manner.
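The attestation mentioned in the background chapter rests on a primitive worth seeing once: each boot stage measures the next and folds the measurement into a running hash chain, as in a TPM's PCR-extend operation. Below is a toy Python sketch of that chain (my illustration of the general mechanism, not code from the book):

    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        """TPM-style extend: fold a new measurement into the running hash."""
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    pcr = bytes(32)  # the register starts at all zeros
    for stage in [b"bootloader", b"kernel", b"init"]:
        pcr = extend(pcr, stage)  # each stage is measured before it runs

    # A verifier who knows the expected software recomputes the chain and
    # compares: changing any stage, or their order, changes the final value.
    print(pcr.hex())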

Thus, extending a user's trust, via software, hardware, and cryptographic techniques, allows us to provide strong security protections for both local and remote computations on sensitive data, while still preserving the performance and features of commodity computers.

How to access ACM BOOKS:

ACM Members can have full access to all Collections in the ACM Books series by purchasing an annual subscription.

  • Professional Members: $29 per year
  • Student Members: $10 per year

Individuals who are not members of ACM may purchase individual titles only, through popular book and e-book retailers.

Institutions may purchase individual collections from the ACM Books series.

