Programming Support I

Wed 3:30-5:00 pm - Catalina Ballroom
Session chair: Emerson Murphy-Hill, North Carolina State University, United States
AutoMan: A Platform for Integrating Human-Based and Digital Computation
Daniel W. Barowy, University of Massachusetts, Amherst, United States
Charlie Curtsinger, University of Massachusetts, Amherst, United States
Emery D. Berger, University of Massachusetts, Amherst, United States
Andrew McGregor, University of Massachusetts, Amherst, United States

Humans can perform many tasks with ease that remain difficult or impossible for computers. Crowdsourcing platforms like Amazon's Mechanical Turk make it possible to harness human-based computational power on an unprecedented scale. However, their utility as a general-purpose computational platform remains limited. The lack of complete automation makes it difficult to orchestrate complex or interrelated tasks. Scheduling human workers to reduce latency costs real money, and jobs must be monitored and rescheduled when workers fail to complete their tasks. Furthermore, it is often difficult to predict the length of time and payment that should be budgeted for a given task. Finally, the results of human-based computations are not necessarily reliable, both because human skills and accuracy vary widely, and because workers have a financial incentive to minimize their effort.

This paper introduces AutoMan, the first fully automatic crowdprogramming system. AutoMan integrates human-based computations into a standard programming language as ordinary function calls, which can be intermixed freely with traditional functions. This abstraction allows AutoMan programmers to focus on their programming logic. An AutoMan program specifies a confidence level for the overall computation and a budget. The AutoMan runtime system then transparently manages all details necessary for scheduling, pricing, and quality control. AutoMan automatically schedules human tasks for each computation until it achieves the desired confidence level; monitors, reprices, and restarts human tasks as necessary; and maximizes parallelism across human workers while staying under budget. We demonstrate AutoMan's effectiveness at harnessing the powers of human computation.
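The core abstraction above, a human computation invoked like an ordinary function, with the runtime repeating the task until a target confidence is met or the budget is exhausted, can be sketched in a toy form. This is our own simplification, not AutoMan's actual API (which is embedded in Scala) or its real statistical quality-control test; the function names, the agreement-ratio acceptance rule, and the simulated worker are all hypothetical.

```python
import random
from collections import Counter

def human_choice(question, options, confidence=0.95, budget=1.00,
                 cost_per_task=0.06, ask_worker=None):
    """Keep posting `question` to workers until one answer reaches the
    target confidence (approximated here by a crude agreement ratio)
    or the budget runs out. Returns (answer, amount_spent)."""
    votes = Counter()
    spent = 0.0
    while spent + cost_per_task <= budget:
        votes[ask_worker(question, options)] += 1
        spent += cost_per_task
        answer, count = votes.most_common(1)[0]
        total = sum(votes.values())
        # Stand-in for AutoMan's statistical quality control:
        # accept once enough workers agree on one answer.
        if total >= 3 and count / total >= confidence:
            return answer, spent
    return None, spent  # budget exhausted without reaching confidence

# Simulated worker pool: mostly accurate, occasionally random.
def simulated_worker(question, options):
    return "cat" if random.random() < 0.9 else random.choice(options)

random.seed(0)
answer, spent = human_choice("Which animal is in this photo?",
                             ["cat", "dog", "bird"],
                             confidence=0.9, ask_worker=simulated_worker)
```

The point of the sketch is the call site: the programmer writes `human_choice(...)` like any other function and never sees scheduling, repricing, or vote counting.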

Talk versus Work: Characteristics of Developer Collaboration on the Jazz Platform
Subhajit Datta, IBM Research, India
Renuka Sindhgatta, IBM Research, India
Bikram Sengupta, IBM Research, India

IBM's Jazz initiative offers a state-of-the-art collaborative development environment (CDE) facilitating developer interactions around interdependent units of work. In this paper, we analyze development data across two versions of a major IBM product developed on the Jazz platform, covering in total 19 months of development activity, including 17,000+ work items and 61,000+ comments made by more than 190 developers in 35 locations. By examining the relation between developer talk and work, we find evidence that developers maintain a reasonably high level of connectivity with peer developers with whom they share work dependencies, but the span of a developer's communication goes much beyond the known dependencies of his/her work items. Using a multiple linear regression model, we find that the number of defects owned by a developer is influenced by the number of other developers (s)he is connected to through talk, his/her interpersonal influence in the network of work dependencies, the number of work items (s)he comments on, and the number of work items (s)he owns. These influences persist even after controlling for workload, role, work dependency, and connection-related factors. We discuss the implications of our results for collaborative software development and project governance.
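The model above regresses defects owned on several talk- and work-related predictors. A minimal ordinary-least-squares sketch of that setup follows; the data here are synthetic and the variable names are ours, chosen only to mirror the predictors the abstract names, not the paper's actual dataset or coefficients.

```python
import numpy as np

# Synthetic per-developer data (hypothetical, for illustration only):
# columns mirror the predictors named in the abstract.
rng = np.random.default_rng(42)
n = 190
talk_degree   = rng.poisson(8, n)    # other developers connected to via talk
influence     = rng.random(n)        # interpersonal influence in the network
items_comment = rng.poisson(30, n)   # work items commented on
items_owned   = rng.poisson(12, n)   # work items owned

# A made-up "true" relationship plus noise, so the fit has something to find.
defects = (0.4 * talk_degree + 2.0 * influence
           + 0.05 * items_comment + 0.3 * items_owned
           + rng.normal(0, 1, n))

# Multiple linear regression: defects ~ intercept + four predictors.
X = np.column_stack([np.ones(n), talk_degree, influence,
                     items_comment, items_owned])
coef, *_ = np.linalg.lstsq(X, defects, rcond=None)
```

On real data one would also report standard errors and control variables (workload, role, and so on), as the paper does; the sketch only shows the shape of the model.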

Speculative Analysis of Integrated Development Environment Recommendations
Kivanc Muslu, University of Washington, United States
Yuriy Brun, University of Washington, United States
Reid Holmes, University of Waterloo, Canada
Michael D. Ernst, University of Washington, United States
David Notkin, University of Washington, United States

Modern integrated development environments include recommendation tools that suggest common tasks and automatically perform the ones chosen by a developer. These tasks include refactorings, auto-completion, and error correction. These tools present little or no information about the effects -- the consequences -- that performing the recommendation will have on the program. For example, a rename refactoring may: (1) modify the source code without changing program semantics, or (2) modify the source code and (incorrectly) change program semantics, or (3) modify the source code and (incorrectly) create compilation errors, or (4) show a name collision warning and require developer interaction, or (5) show an error without changing the program at all. Computing the consequences of a recommendation is difficult for the developer, which adds to the burden of using these tools. This paper aims to reduce this burden with a technique that informs the developer of the consequences of code transformations. Taking Eclipse Quick Fix as an example, we built a plug-in, Quick Fix Scout, that computes the consequences of Quick Fix recommendations. Our experiments with 14 users demonstrate that developers complete compilation error removal tasks 10% faster when using Quick Fix Scout compared to users employing Eclipse's traditional Quick Fix alone.
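The speculative-analysis idea, apply each candidate recommendation to a copy of the program and check the outcome before the developer commits to one, can be illustrated in miniature. This toy uses Python's built-in `compile()` as the "compiler" and string rewrites as stand-in fixes; it is our own illustration, not Quick Fix Scout's implementation.

```python
def compiles(source):
    """Return True if `source` parses as valid Python."""
    try:
        compile(source, "<speculative>", "exec")
        return True
    except SyntaxError:
        return False

def speculate(source, fixes):
    """Speculatively apply each candidate fix to a copy of the source
    and report whether the result compiles -- a toy version of showing
    a recommendation's consequences before the developer picks one."""
    return [(name, compiles(fix(source))) for name, fix in fixes]

broken = "def greet(name)\n    return 'hi ' + name\n"

# Hypothetical candidate fixes, standing in for IDE recommendations.
fixes = [
    ("add colon",       lambda s: s.replace("def greet(name)",
                                            "def greet(name):")),
    ("delete the line", lambda s: s.replace("def greet(name)\n", "")),
]

results = speculate(broken, fixes)
```

Because each fix is tried on a throwaway copy, the original source is never touched: the tool can label "add colon" as error-removing and "delete the line" as error-introducing before the developer chooses.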

Static Type Systems (Sometimes) have a Positive Impact on the Usability of Undocumented Software: An Empirical Evaluation
Clemens Mayer, University Duisburg-Essen, Germany
Stefan Hanenberg, University Duisburg-Essen, Germany
Romain Robbes, University of Chile, Chile
Eric Tanter, University of Chile, Chile
Andreas Stefik, Southern Illinois University Edwardsville, United States

Static and dynamic type systems (as well as more recently gradual type systems) are an important research topic in programming language design. Although the study of such systems plays a major role in research, relatively little is known about the impact of type systems on software development. Perhaps one of the more common arguments for static type systems is that they require developers to annotate their code with type names, which is thus claimed to improve the documentation of software. In contrast, one common argument against static type systems is that they decrease flexibility, which may make them harder to use. While positions such as these, both for and against static type systems, have been documented in the literature, there is little rigorous empirical evidence for or against either position. In this paper, we introduce a controlled experiment where 27 subjects performed programming tasks on an undocumented API with a static type system (which required type annotations) as well as a dynamic type system (which did not). Our results show that for some types of tasks, programmers completed tasks faster using a static type system, while for others, the opposite held. In this work, we document the empirical evidence that led us to this conclusion and conduct an exploratory study to theorize about why.
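The per-task-type comparison underlying the result above reduces to comparing mean completion times per condition. The sketch below uses entirely hypothetical task names and timings, invented only to show the shape of a mixed outcome in which neither type system wins on every task; it is not the paper's data.

```python
from statistics import mean

# Hypothetical completion times (minutes) per task and condition.
times = {
    ("use undocumented API", "static"):  [12.0, 14.5, 11.2],
    ("use undocumented API", "dynamic"): [16.1, 15.3, 17.4],
    ("fix type error",       "static"):  [9.8, 10.4, 9.1],
    ("fix type error",       "dynamic"): [7.2, 6.9, 8.0],
}

def faster_condition(task):
    """Return the condition with the lower mean completion time."""
    s = mean(times[(task, "static")])
    d = mean(times[(task, "dynamic")])
    return "static" if s < d else "dynamic"

winners = {task: faster_condition(task)
           for task in {"use undocumented API", "fix type error"}}
```

A real analysis would of course add significance testing across subjects; the point here is only that the winner can flip from one task type to the next.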