I study the relationship between technical change and social change, concentrating on information technologies and change in social institutions. My early academic career (the 1970s through the late 1980s) focused on the study of information technology use and effects in complex organizations, mainly municipal corporations and other government entities. This work started with a focus on policy, such as whether to centralize or decentralize service structures in organizations. In time, I became more interested in the problem of designing and developing sophisticated socio-technical information infrastructures that must function effectively in complex organizational and institutional settings, usually over long periods of time. My most recent work involves sampling at the extremes, and concentrates on the interesting situation of institutionalized learning in one of the world's most remote places: Antarctica. Epistemic infrastructure and learning in Antarctica are both examples of socio-technical change in highly institutionalized production sectors, as I explain below.
I call this work high-level requirements analysis because of its focus on issues of organizational and institutional usability that are seldom considered by software engineers or system developers.
A summary of this perspective can be found in the paper "Institutional Factors in Information Technology Innovation," which appeared in Information Systems Research. Implementation of this research program in highly institutionalized production involved the development of information infrastructure in support of other service and physical infrastructures.
The rise of smart phones in cellular telephony is blurring the distinction between telephony and the Internet. Telephony has long been highly regulated and institutionalized, although regulatory and institutional structures are changing as a result of cellular telephony. Working with colleagues then at the University of Jyvaskyla in Finland, I studied the evolution of key institutional enablers that explain why the diffusion of cellular telephony occurred much faster in the Nordic countries than anywhere else, including the United States, where the technology was invented at AT&T in 1947. This was a study in technical history, involving access to original source documentation related to signaling protocols, spectrum allocation, and addressing. A complementary assessment of source documentation on these issues was conducted in the U.S. and Japan, the other primary actors in the spread of global cellular telephony. The dominant factors influencing the speed of diffusion were each region's unique institutional and regulatory structures, which affected the speed with which signaling, spectrum, and addressing standards could be set. Much of this work focused on the establishment of the first widely used analog standard (NMT-450/900) and on the subsequent establishment of the GSM-900/1800 digital standard that is now the dominant world standard. The relatively slow start of cellular telephony in the US was due in large measure to the complexities of regulatory structures and policies at the FCC, where competition for radio spectrum for cellular telephony put the telephone industry into direct conflict with the broadcasting industry, both of which were (and are) regulated by the FCC. This work was supported by the Academy of Finland (roughly equivalent to the US NSF), and conducted with Kalle Lyytinen, Joel West, Ashley Andeen, Vladislav Fomin, Ari Manninen, and other colleagues.
Another line of work examined institutions commonly regarded as cultural institutions, but that more importantly constitute the primary mechanisms for the creation of knowledge communities. This research was primarily historical, tracing the rise of such institutions from classical antiquity through the early modern period, and on through the scientific and industrial revolutions. A major focus was the implications of the Internet and other information infrastructure for the role of such institutions. This work was done initially for the Organisation for Economic Co-operation and Development (OECD). My collaborator was Margaret Hedstrom.
Antarctica is remote in the way the northern polar region is remote: day and night each last six months. But Antarctica has been even more remote than the northern polar region because humans could not reach the continent until the late 19th century. James Cook visited the Antarctic region in HMS Resolution in the mid-1770s and speculated about a large land mass there, but could not get his sail-powered ship through the pack ice. Steamships were required. Carsten Borchgrevink's team used a steamship to make the first real landing on the continent in 1898, kicking off the Heroic Age of Antarctic exploration that featured Shackleton, Scott, Amundsen, and others now known as the early Antarctic explorers. Permanent settlement waited until the middle of the 20th century, becoming serious only as a result of the International Geophysical Year (1957-58). Since then Antarctica has become an important outpost for research of many kinds. My interest in Antarctica is two-fold. First, the contemporary institutional construct of Antarctic research is the Antarctic Treaty System, which was born largely of the Cold War and now must change to fit a post-Cold War world. Second, the kind of research expanding most rapidly in Antarctica uses remote sensing and the collection of vast amounts of data that rely on cyberinfrastructure. The interplay of this institutional and technical change provides a good opportunity to examine aspects of high-level requirements.
The goal is to get things just right: to reduce the problem enough to accomplish what we set out to do. It is common in requirements analysis to stop too early. Here's an example: a system developer claims that a specification for a system is based on a statement of requirements. Whose requirements? The users' requirements, says the developer. How does the developer know that the requirements actually reflect what the users want or need? The users signed off on them, says the developer. Do the users know what they want or need? That's their problem, says the developer. Actually, that is everybody's problem, or at least the problem of everybody involved in creating the system. If the requirements aren't right, the specifications cannot be right, and the resulting system will not be right. Stopping too early does not help. Why don't the users know what they want or need? And how is this problem to be fixed? Getting good requirements is often harder than it appears, precisely because users do not know what they want or need. That's when one has to go to the high level and begin to determine what the requirements must be. That's the gist of my work: determining what requirements must be. A corollary is determining what requirements cannot be. The scheme is to narrow the search space around requirements such that the actual requirements are probably somewhere within the residual. It ain't pretty, but it's better than guessing.
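To make the narrowing idea concrete, here is a minimal sketch in Python. It is not from the original work, and every name and constraint in it is hypothetical; it simply treats candidate requirements as a set and eliminates candidates that fail an exclusion test (what the requirements cannot be), leaving a residual in which the actual requirements probably sit.

from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    # A hypothetical candidate requirement and one institutional attribute.
    name: str
    needs_regulatory_change: bool

def cannot_be_requirement(c: Candidate) -> bool:
    # Exclusion test: one thing we know the requirements cannot be,
    # e.g., nothing that demands a regulatory change outside our control.
    return c.needs_regulatory_change

def narrow(candidates: list) -> list:
    # Drop impossible candidates; the residual probably contains
    # the actual requirements, so the search continues inside it.
    return [c for c in candidates if not cannot_be_requirement(c)]

space = [
    Candidate("real-time polar sensor feed", needs_regulatory_change=False),
    Candidate("unilateral spectrum reallocation", needs_regulatory_change=True),
]

for c in narrow(space):
    print(c.name)  # only the feasible candidate survives the narrowing

The point is not the code but the logic it illustrates: eliminating what cannot be a requirement is often more tractable than enumerating what must be, and each exclusion shrinks the space in which the real requirements can hide.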