FLASH INFORMATIQUE FI



Why do we need to understand IT risk?




Patrick AMON


It is a truism that our world is increasingly interconnected. This interconnectivity, surprisingly, is not new. In colonial times (during which, famously, "the sun never set on the British Empire") the colonies and the "motherland" were already tightly intertwined economically, but modern communications have enormously furthered this trend, in particular by dramatically cutting down the time lags involved. "Just-in-time" manufacturing has effectively globalized the production process, whereby the failure of a single element in the production chain can disrupt the entire chain nearly instantaneously (e.g. the 1998 General Motors strike, in which walkouts at two parts plants in Flint, Michigan, idled nearly all of the company's North American assembly plants). One should not extrapolate and pretend that integrated production is entirely new
- large-scale integrated production systems date back to antiquity, and during the Great Depression a supply route along the Hudson River was designed so that steel arrived still warm at the Empire State Building construction site. Nevertheless, integration on a global scale is relatively new, and has been made possible by the emergence of modern technology, in particular, although not exclusively, the Internet.

This interconnectedness brings new urgency to the need to understand the risk that stems from it, since a local disturbance can have widespread (non-local) effects.

Such risk analysis falls within the category known in financial jargon as "operational risk", i.e. that which falls outside the better-defined and better-understood silos of credit, market and, perhaps, liquidity risk. Market risk is the risk that market valuations change. Credit risk concerns counterparties defaulting on a given transaction. Liquidity risk concerns a transaction that cannot be completed for lack of an available counterparty (and not the failure of a committed counterparty to honor his or her commitments).

Internet technology is designed to offer some redundancy. Most points on a given network are generally reachable through more than one route - at least down to a certain scale. This redundancy may indeed break down at the machine level, where a given file present on a given machine may not be reachable if that machine's power cord is pulled out. On the other hand, the paths leading to the machine will generally go through an Internet Service Provider, which in turn will be connected to other segments of the Internet through a web of paths, with most pairs of points offering more than one path between them. An interesting point to notice, however, is that an individual machine will in general be connected to a single Ethernet cable, so the degree of redundancy is not constant across the entire path from one machine to the other. If one draws a line between the two machines, the points closest to the middle of the line are likely to have very different connectivity and redundancy than those at the ends of the line. The exact dependence between this degree of redundancy and the location along the signal path is not at all clear, and is one of the fundamental issues we at ISIS are studying.
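
To make this concrete, here is a minimal sketch (Python with the networkx package; the toy topology is entirely invented) of how redundancy varies along a path: the core between two providers offers several edge-disjoint routes, while each end host hangs off a single access link.

```python
# Toy illustration: redundancy is high in the "core" and low at the edges.
import networkx as nx

G = nx.Graph()
# Each host is attached to its ISP by a single cable (no redundancy).
G.add_edge("host_A", "isp_A")
G.add_edge("host_B", "isp_B")
# The two ISPs are interconnected through a small mesh of transit providers.
for transit in ["transit_1", "transit_2", "transit_3"]:
    G.add_edge("isp_A", transit)
    G.add_edge("isp_B", transit)

# The local edge connectivity equals the number of edge-disjoint paths:
# cutting fewer links than this cannot disconnect the pair.
print(nx.edge_connectivity(G, "isp_A", "isp_B"))    # 3: the core is redundant
print(nx.edge_connectivity(G, "host_A", "host_B"))  # 1: limited by the access links
```

In this picture the middle of the path tolerates failures that the ends cannot, which is precisely the uneven redundancy described above.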

This relative redundancy of IT infrastructure is, however, lessened by the predominance of a small number of vendors and technologies (a monoculture), which facilitates the spread of a given failure type across the entire network. The nature of such risk is quite different from that affecting another large-scale network, the electric grid, in which the enormous costs, both financial and environmental, prevent redundant structures from being built, so that a localized failure at a single point can cause widespread failure (e.g. the 2003 failure in Ohio that caused disruptions across much of the East Coast). The problem here is the homogeneity of the existing infrastructure, not a lack of redundancy (although, as we alluded to above, that is another kind of problem). On the Internet, while a small number of systems (e.g. the root DNS servers) are central to the overall system, there is significant (albeit not infinite, and often not as great as commonly believed) redundancy built into the very fabric of the system. Yet this redundancy is moot if a small number of root causes can quickly disrupt a large number of agents that are all equally unprotected against them. In other words, once one has figured out how to take down a given machine, it is straightforward to apply the same trick to take down every machine of the same type, which might actually represent a significant fraction of the technology required to get from a given point to another. This is similar to a pandemic, in which a single infection simply propagates to many hosts who are all equally vulnerable to it. Having a more diverse population increases the likelihood that segments of the population have already encountered the germ and are therefore immune to its deleterious effects.
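
The monoculture argument can be illustrated with a small back-of-the-envelope simulation (plain Python; the platform market shares are hypothetical): a single working exploit takes down every machine of the targeted type, so the expected damage grows with the prevalence of the dominant platform.

```python
# Expected fraction of a fleet compromised by one exploit against one platform.
import random

def compromised_fraction(platform_shares, trials=10_000):
    """Pick a target platform in proportion to its prevalence; every machine
    of that type falls. Return the average fraction of the fleet lost."""
    platforms = list(platform_shares)
    weights = [platform_shares[p] for p in platforms]
    total = 0.0
    for _ in range(trials):
        target = random.choices(platforms, weights=weights)[0]
        total += platform_shares[target]
    return total / trials

print(compromised_fraction({"X": 1.0}))                      # monoculture: ~1.0
print(compromised_fraction({"X": 0.4, "Y": 0.3, "Z": 0.3}))  # diverse fleet: ~0.34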

So redundancy is not enough. Diversity is also essential. But both diversity and redundancy are directly at odds with economies of scale, which tend to consolidate infrastructure and push it towards the greatest possible uniformity - at least as long as the cost of mitigating IT risk isn't factored into the equation.

A further disincentive to redundancy is the need for both confidentiality and traceability of data. It is obvious that the more copies of a given piece of information exist, and the more ways there are to get to it, the more difficult it is to secure it properly. Ultimately, the most secure data is data that is locked away safely, inaccessible. This approach, though, clearly interferes with the usability of the information - it is there, but no one can get to it. Increasing the usability of information also increases its accessibility, which in turn increases its visibility. The only possible solution to this conundrum is cryptography, which restricts access to the meaning of data without restricting access to the underlying data itself. This can be done in one of many ways that are beyond the scope of this paper. EPFL is lucky to have a world-leading cryptographer, Arjen Lenstra, running ISIS, EPFL's new information security center.
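
As a minimal illustration of that last point (using the Python cryptography package; the plaintext is of course fictitious), encryption lets the ciphertext be copied and transported freely while reserving its meaning for key holders:

```python
# Encryption restricts access to the *meaning* of data, not to the data itself.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # held only by authorized parties
box = Fernet(key)

ciphertext = box.encrypt(b"account 0042, PIN 1234")
# The ciphertext may be stored or replicated anywhere; without the key it is opaque.
print(ciphertext[:16], b"...")

# Only a key holder can recover the meaning.
print(box.decrypt(ciphertext))       # b'account 0042, PIN 1234'
```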

To summarize things so far, information security and availability must be assessed along four dimensions:

  1. confidentiality
    The necessity that information be available only to authorized parties.
  2. availability
    The ability to reach the information.
  3. traceability
    The ability to ascertain who has had access to the information.
  4. integrity
    The ability to ascertain that information hasn't been tampered with.

Of course, as we alluded to above, these measures are not necessarily independent, and trade-offs are necessary in practice - e.g. making information readily available to authorized parties (high availability) may be difficult to reconcile with its confidentiality (i.e. restricting it to the parties authorized to access it). The art of IT risk management consists largely of reaching an optimal balance among those four dimensions of assessment for a given purpose. The multiplicity of these dimensions makes the management of risk in the IT world particularly challenging from a technical as well as an organizational standpoint.

Why IT risk is different

Risk management has become a standard tool in corporate finance, taught to all quantitative analysts and now largely formalized into a set of standard, accepted tools. One of the enablers of this state of affairs was the emergence, in the mid-1990s, of a proposed methodology, RiskMetrics, based on, but not identical to, that used in a large, complex bank, J.P. Morgan. RiskMetrics was based on readily available data, i.e. prices and the correlations between the movements of those prices. Moreover, the methodology was published openly, as were basic data sets that allowed many risk calculations to be performed easily and cheaply. This was a revolution. Suddenly the notion of risk had been properly defined, and a systematic, formal and realistic way to calculate it proposed. The methodology - or rather the family of methodologies so proposed - was so widely adopted that it was later formalized into law through the mechanism of the Basel Committee of the Bank for International Settlements, which regulates banking activity in member countries (which include most large powers). From that point on, it became mandatory for banks to calculate and report their risk as per the accepted framework and to set solvency standards against those calculated risks.
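
For the curious reader, here is a minimal sketch in the spirit of the variance-covariance approach popularized by RiskMetrics (Python with numpy; the positions, volatilities and correlations are invented), showing how portfolio risk can be computed from exactly that kind of readily available data:

```python
# Variance-covariance style Value-at-Risk from volatilities and correlations.
import numpy as np

weights = np.array([1_000_000.0, 500_000.0])   # positions in CHF (hypothetical)
vols    = np.array([0.012, 0.020])             # daily return volatilities
corr    = np.array([[1.0, 0.3],
                    [0.3, 1.0]])               # correlation of daily returns

cov = np.outer(vols, vols) * corr              # covariance matrix of returns
portfolio_sigma = np.sqrt(weights @ cov @ weights)  # std deviation of daily P&L, in CHF

# One-day 95% Value-at-Risk under a normal-returns assumption (z = 1.645).
var_95 = 1.645 * portfolio_sigma
print(f"1-day 95% VaR ~ CHF {var_95:,.0f}")
```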

One could have hoped for a similar phenomenon in IT risk, but this has not yet been the case. The reasons are multiple:

The first - and perhaps most complicated - issue is that data on the operation of the network has in the past been generally very difficult to obtain. Since there are no accepted models for measuring IT risk, no one knows exactly what kind of data to collect, and whenever data is collected, it tends to be done in a proprietary, closed, haphazard fashion that makes it very difficult to analyze in a useful way. The limited availability and usability of the data has further hindered its analysis, preventing the emergence of acceptable models that would provide further insight into what to look for in the data that is collected, and therefore the further acquisition of meaningful data.

Beyond the technical issues, however, the lack of a regulatory framework to enforce the recognition of IT risk has largely precluded the dissemination of event-level data about IT issues (i.e. reports about actual intrusions or other IT anomalies), which in turn has led to IT risk being highly underestimated. Few adverse IT events are publicized, and loss-level information certainly has not been.

A further challenge to IT risk management has been the difficulty of valuing the underlying commodity whose loss is being measured - information. While financial markets are very good at pricing (rightly or wrongly) assets and debts, information is by its very nature difficult to value, precisely because its value is multidimensional. A piece of information may have value because of its confidentiality (an access code to a bank account that grants its holder full and uncontrolled access to the account), or rather because of its universal accessibility (e.g. a phone number). Information is therefore more difficult to value than an asset that is traded on an open market and whose price is readily observable. Perhaps the easiest way to price information is simply to measure the losses that arise when any of its attributes is compromised. The problem with this approach, however, is that, again, IT-related losses are either hugely under-reported or not sufficiently understood to carry out a meaningful forensic analysis that would allow the loss to be broken down into its components. In other words, the valuation of information, and therefore of the infrastructure that supports it, is a complex endeavor whose pursuit will require developments in both the technological and societal realms (i.e. how do we get the information we need, from whom, etc.?).
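
One simple way to make this loss-based valuation concrete is an expected-loss decomposition across the four dimensions listed earlier (plain Python; all frequencies and impacts are purely illustrative - precisely the figures that are so hard to obtain in practice):

```python
# Expected annual loss per attribute = incident frequency x loss per incident.
incidents = {
    # attribute: (expected incidents per year, estimated loss per incident in CHF)
    "confidentiality": (0.05, 2_000_000),   # rare leak, very costly
    "availability":    (2.0,     20_000),   # frequent short outages, modest cost
    "traceability":    (0.2,    100_000),   # audit gap discovered occasionally
    "integrity":       (0.1,    500_000),   # corrupted records, expensive to repair
}

annual_loss = {attr: freq * impact for attr, (freq, impact) in incidents.items()}
for attribute, loss in annual_loss.items():
    print(f"{attribute:15s} expected loss ~ CHF {loss:>9,.0f}")
print(f"{'total':15s} expected loss ~ CHF {sum(annual_loss.values()):>9,.0f}")
```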

Where do we go from here?

Ultimately, we have established that IT risk is hard to quantify and to manage. What we don't know yet is whether there is anything we can do about it. The answer is yes! There are things we can and should do. The first is to propose a realistic model of how that risk emerges and how it propagates. The model needs to be the result of a cooperative endeavor between several parties:

  1. Policy and decision makers to formulate the right incentives for the principals to collaborate and determine the governance requirements.
  2. Technical experts to understand the infrastructure and its connections and to generate the data.
  3. Economists, econometricians or financial engineers to build the information valuation framework.
  4. Public organizations to endorse and promulgate the methodology.

We then need to propose, in agreement with all interested parties, a framework to model this risk and, drawing on technical experts, obtain actual data-flow and matching economic transaction information. We then need to test the model against the data and adjust the former until it agrees with the latter. Finally, we need to create the correct incentives to manage this risk optimally for society at large. Such an effort is what ISIS at EPFL is about. It is a forum that unites experts from all parts of the business process to understand the critical questions affecting IT risk and how to manage it. Our purpose is to foster interdisciplinary discussion in order to establish a common framework for IT risk and to test it against actual experimental data, in the spirit of good science and engineering. We are fortunate at EPFL to have experts in many of the subject areas needed to accomplish this ambitious task, and the ability to draw on considerable external resources - both public and corporate - to move forward. We look forward to the considerable work ahead to address the issues spelled out above.


