Saturday, May 17, 2008

Theoretical Computer Science (TCS)

Theoretical Computer Science (TCS), broadly speaking, is concerned with understanding the very nature of computation:
What problems can be solved by computers? And how efficiently can such problems be solved? Can `hard' problems be used to our advantage in any way? TCS encompasses research in such diverse areas as complexity theory, algorithms, cryptography, distributed computing, machine learning, and more; the common thread is a focus on precise models and rigorous mathematical analysis of particular problems within those models.
We have a strong group of faculty actively working in this area. We also have a large number of students who are encouraged to get involved early by attending weekly TCS reading groups run by the group. See http://www.cs.umd.edu/areas/Theory/ for further details. Brief descriptions of faculty research interests follow.
Bill Gasarch is interested in complexity theory and TCS-at-large; his most recent research has been in the area of communication complexity and applications of combinatorics to problems in this area. To get a flavor for this type of research, consider the following problem: Alice holds as input an n-bit string A, while Bob holds as input an n-bit string B. Alice and Bob want to determine whether or not their strings are equal. Clearly, there is the trivial solution in which Alice sends all of A to Bob; is there a better protocol that communicates fewer bits? What is the minimum number of bits they need to exchange in order to solve the problem? Besides their inherent beauty, problems of this type have ties to other areas of TCS (e.g., proving lower bounds on computational efficiency of certain problems) and possible practical applications as well (e.g., minimizing resource-intensive communication in sensor networks).
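To make this concrete, here is a rough Python sketch of the well-known randomized "fingerprinting" approach to the equality problem: Alice sends a small random prime together with her string reduced modulo that prime, so only about O(log n) bits are exchanged, at the cost of a small probability of error. The function names and parameter choices below are illustrative only.

```python
import random

def is_probably_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    """Sample a random prime with roughly `bits` bits."""
    while True:
        candidate = random.getrandbits(bits) | 1 | (1 << (bits - 1))
        if is_probably_prime(candidate):
            return candidate

def equality_protocol(a_bits: str, b_bits: str) -> bool:
    """Randomized equality test: Alice sends (p, A mod p) -- about O(log n) bits --
    instead of all n bits.  Unequal strings are declared equal only with small
    probability (their fingerprints rarely collide modulo a random prime)."""
    n = len(a_bits)
    a_val = int(a_bits, 2)
    b_val = int(b_bits, 2)
    p = random_prime(max(8, 2 * n.bit_length()))
    return a_val % p == b_val % p   # Bob compares fingerprints

# Example: two 1000-bit strings differing in one position.
a = "1" * 1000
b = "1" * 999 + "0"
print(equality_protocol(a, a), equality_protocol(a, b))
```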
Jonathan Katz focuses primarily on cryptography and, more generally, those aspects of computer security that can be precisely modeled and formally reasoned about. His research has spanned a wide range of topics, from work on the theoretical foundations of cryptography (e.g., analyzing protocols for secure distributed computation, in which several parties jointly compute something from their private information without revealing that information to one another), to the design of efficient cryptosystems with novel properties (e.g., "anonymous signatures" and identity-based encryption), to computational number theory and secure algorithms for peer-to-peer and wireless sensor networks. Amazingly and sometimes counter-intuitively, it is possible to construct cryptographic solutions that can be proven secure against arbitrary (yet efficient) attackers as long as certain problems, such as factoring, are as "hard" as is commonly believed.
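As a purely illustrative (and wildly insecure) toy example of how a cryptosystem can rest on the presumed hardness of factoring, here is textbook-style RSA with tiny hard-coded primes; real systems use enormous random primes, careful padding, and vetted libraries.

```python
from math import gcd

# Toy, textbook-style RSA with tiny hard-coded primes -- purely illustrative.
p, q = 61, 53                 # secret primes; recovering them from n is "factoring"
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

message = 42
ciphertext = encrypt(message)
assert decrypt(ciphertext) == message
print(ciphertext, decrypt(ciphertext))
```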
Samir Khuller's research centers around the design of efficient graph algorithms and approximation algorithms. He is particularly interested in problems for which it is widely believed (for mathematical reasons) that there is no fast algorithm to obtain the optimal solution (namely, NP-hard problems). Recently, one specific research project of Samir's has dealt with algorithms for storage and movement of data; this work has applications to large multimedia data storage systems. Some of the principal challenges that arise in this context are: (a) deciding how many copies of each data item to store, (b) determining the exact layout of data on a set of servers, and (c) dealing with changing workloads and dynamic data access patterns, all while seeking to maximize access to the data. More recently, Samir has also been working on developing algorithms for finding the cheapest paths that incorporate actual costs (think: gas prices) and not just distances.
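As a small illustration of the path-finding flavor of this work, here is a sketch of Dijkstra's algorithm run over edge weights that represent trip costs in dollars (say, fuel) rather than physical distances; the road network and prices below are made up.

```python
import heapq

def cheapest_path(graph, source, target):
    """Dijkstra's algorithm over non-negative edge costs (e.g., dollars of fuel),
    not physical distances.  `graph` maps node -> list of (neighbor, cost)."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path by walking back from the target.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return dist[target], list(reversed(path))

# Hypothetical road network: edge weights are trip costs in dollars.
roads = {
    "home":        [("gas_station", 3.0), ("highway", 5.5)],
    "gas_station": [("highway", 2.0), ("office", 6.0)],
    "highway":     [("office", 1.5)],
}
print(cheapest_path(roads, "home", "office"))
```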
Clyde Kruskal recently solved a problem in geometric combinatorics related to "colorings" of the plane (a coloring of the plane is simply an assignment of a color to each point in the plane). Consider the statement: in any 2-coloring of the plane there exist two points of the same color exactly 1 inch apart. This is true (and easy to prove). It is also true for 3-colorings, but false for 7-colorings. The cases of 4, 5, and 6 are open, though Erdős conjectured that 4 is the maximum value for which the statement is true. Clyde has looked at the problem in restricted regions of the plane, showing, for example, that a 2-coloring of a (small) square exists in which there are no points of the same color exactly 1 inch apart.
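One way to get a feel for such coloring questions is to probe a proposed coloring by brute force. The sketch below uses an arbitrary "stripe" coloring chosen only for illustration: it randomly samples pairs of points exactly 1 apart and reports a same-colored pair if it finds one, which it quickly does for stripes.

```python
import math
import random

def stripe_coloring(x, y):
    """Example 2-coloring: vertical stripes of width 1 (colors 0 and 1)."""
    return int(math.floor(x)) % 2

def find_monochromatic_unit_pair(coloring, trials=100_000, region=5.0):
    """Randomly probe for two points of the same color exactly 1 apart."""
    for _ in range(trials):
        x = random.uniform(0.0, region)
        y = random.uniform(0.0, region)
        theta = random.uniform(0.0, 2.0 * math.pi)
        # Second point at distance exactly 1 from the first.
        x2, y2 = x + math.cos(theta), y + math.sin(theta)
        if coloring(x, y) == coloring(x2, y2):
            return (x, y), (x2, y2)
    return None

witness = find_monochromatic_unit_pair(stripe_coloring)
print(witness)   # the stripe coloring is quickly "caught" by such a pair
```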
Dave Mount's interests are in computational geometry, that is, the study of algorithms and data structures for geometric problems. He is particularly interested in problems that have applications in areas such as computer graphics, computer vision, and manufacturing. He is currently working on three projects. The first involves the computational complexity of approximate range searching. The problem is to store a large multi-dimensional point set in a data structure so that the number of points lying within some given query region (a sphere or cube, for example) can be computed very efficiently. In the approximate setting the range shape is treated as a fuzzy object, and points that are sufficiently close to the boundary may either be counted or not.
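The following brute-force sketch illustrates one common way to phrase the approximate semantics for a spherical query: points well inside the ball must be counted, points well outside must not be, and points whose distance is within a factor of (1 ± ε) of the radius may be counted either way. The real research concerns data structures that answer such queries far faster than this linear scan; the parameters here are arbitrary.

```python
import math

def approx_range_count(points, center, radius, eps):
    """Brute-force illustration of approximate range counting semantics.
    Returns (must_count, may_count): any answer in
    [must_count, must_count + may_count] is considered correct."""
    must, fuzzy = 0, 0
    for p in points:
        d = math.dist(p, center)
        if d <= radius * (1.0 - eps):
            must += 1            # well inside: must be counted
        elif d <= radius * (1.0 + eps):
            fuzzy += 1           # near the boundary: may be counted or not
        # well outside: must not be counted
    return must, fuzzy

pts = [(0.0, 0.0), (0.5, 0.5), (0.95, 0.0), (1.4, 0.0)]
print(approx_range_count(pts, center=(0.0, 0.0), radius=1.0, eps=0.1))
```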
The second project involves efficient data structures for point-set pattern matching in a database, where the point sets may be subject to geometric transformations (such as translation and rotation) or the introduction of spurious or outlying points. The third project involves efficient algorithms for geometric interpolation based on a method called Kriging. This work is being applied to data fusion applications in remote sensing and geostatistics.
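For readers unfamiliar with Kriging, the sketch below shows a bare-bones "simple kriging" style interpolation under an assumed Gaussian covariance model; the model, its parameters, and the toy data are illustrative only and are not drawn from this project.

```python
import numpy as np

def gaussian_cov(h, sill=1.0, length=1.0):
    """Assumed Gaussian covariance model: C(h) = sill * exp(-(h/length)^2)."""
    return sill * np.exp(-(h / length) ** 2)

def simple_kriging(sample_xy, sample_vals, query_xy, mean=0.0):
    """Predict the value at `query_xy` as mean + weighted residuals, where the
    weights solve the kriging system K w = k built from the covariance model."""
    sample_xy = np.asarray(sample_xy, dtype=float)
    sample_vals = np.asarray(sample_vals, dtype=float)
    # Pairwise distances among samples, and from samples to the query point.
    diff = sample_xy[:, None, :] - sample_xy[None, :, :]
    K = gaussian_cov(np.linalg.norm(diff, axis=2))
    k = gaussian_cov(np.linalg.norm(sample_xy - np.asarray(query_xy), axis=1))
    weights = np.linalg.solve(K + 1e-10 * np.eye(len(K)), k)  # tiny jitter for stability
    return mean + weights @ (sample_vals - mean)

# Toy data: noisy samples of an unknown surface, interpolated at a new point.
xy = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vals = [1.0, 2.0, 2.0, 3.0]
print(simple_kriging(xy, vals, query_xy=(0.5, 0.5), mean=2.0))
```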
The work of Aravind Srinivasan lies generally in the area of randomized algorithms. His research has focused most recently on probabilistic models and algorithms in epidemiology, networking, and combinatorial optimization. Examples include: models for disease-spread in large urban areas, and efficient algorithms to control such spread; information-dissemination in unstructured peer-to-peer networks; estimating the capacity of wireless networks; and approximation algorithms for scheduling problems.
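As a toy illustration of probabilistic disease-spread modeling, here is a sketch of an SIR-style simulation on an Erdős–Rényi random graph, in which each infected node infects each susceptible neighbor independently with some probability and then recovers; the graph and parameters are arbitrary and are not taken from the actual urban-scale models.

```python
import random

def random_graph(n, p):
    """Erdos-Renyi G(n, p): include each possible edge independently with prob p."""
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def sir_simulation(adj, beta=0.3, seed_node=0):
    """Discrete-time SIR: each infected node infects each susceptible neighbor
    independently with probability beta, then recovers.  Returns outbreak size."""
    susceptible = set(adj) - {seed_node}
    infected = {seed_node}
    recovered = set()
    while infected:
        newly_infected = set()
        for u in infected:
            for v in adj[u]:
                if v in susceptible and random.random() < beta:
                    newly_infected.add(v)
        susceptible -= newly_infected
        recovered |= infected
        infected = newly_infected
    return len(recovered)

graph = random_graph(n=500, p=0.01)
outbreaks = [sir_simulation(graph, beta=0.3) for _ in range(20)]
print(sum(outbreaks) / len(outbreaks), "nodes infected on average")
```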
Uzi Vishkin's research interests center around parallel algorithms and architectures. Facilitating a transition into ubiquitous parallel computing has been a strategic objective for computer science and engineering since the field's inception in the 1940s. A theory enthusiast, he has made it the overriding theme of his work to use theory to guide the rest of the field in addressing this strategic objective. Key components of his comprehensive plan include: (i) the very rich PRAM parallel algorithmic theory, and (ii) a PRAM-On-Chip vision of a computer system framework. The latter provides a powerful approach to multi-core architectures and, in particular, to the exponential increase in the number of cores in the roadmaps of most vendors into the late 2010s. Recently, he has focused on the feedback loop between algorithms, their programming and implementation, and architecture, as well as their performance modeling.
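To give a flavor of PRAM-style algorithmic thinking, here is a sketch of the classic logarithmic-round prefix-sums (scan) computation, written as a sequential simulation: within each round, every position reads only the previous round's array, so on a PRAM all n updates in a round could run concurrently. The inclusive-scan formulation below is one illustrative choice.

```python
def prefix_sums_pram_style(values):
    """Inclusive prefix sums in O(log n) rounds (Hillis-Steele style scan).
    Each position's update depends only on the previous round's array, so on a
    PRAM every update within a round could execute in parallel."""
    x = list(values)
    n = len(x)
    step = 1
    while step < n:
        # One "parallel" round: all positions updated from the old array.
        x = [x[i] + (x[i - step] if i >= step else 0) for i in range(n)]
        step *= 2
    return x

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(prefix_sums_pram_style(data))   # [3, 4, 8, 9, 14, 23, 25, 31]
```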
Most of the funding for theory comes from NSF.
