You were invited by the university president to prepare an IS plan for the university. Discuss the steps required to expedite the implementation of the IS plan. (at least 5000 words)
When we talk about an IS plan, we mean the product of information systems planning: a process for developing a strategy and plans for aligning information systems with the business strategies of an organization. It is the step-by-step plan for meeting the needs of an organization and reaching its goals and objectives. Many enterprises do not have model-based information systems development environments that allow system designers to see the benefits of rearranging an information systems development schedule. Consequently, questions such as the following cannot be answered:
-What effect will there be on the overall schedule if an information system is purchased versus developed?
-At what point does it pay to hire additional contract staff to advance a schedule?
-What is the long-term benefit of a 4GL versus a 3GL?
-Is it better to generate 3GL code than to use a 4GL?
-What are the real costs of distributed software development over centralized development?
If these questions were transformed and applied to any other component of a business (e.g., accounting, manufacturing, distribution, or marketing) and remained unanswered, that unit's manager would surely be fired. We not only need answers to these questions now; we also need them quickly, cost-effectively, and in a form that can be modeled and changed in response to unfolding realities. What follows is a brief review of a successful ten-step strategy that answers these questions.
Too many half-billion-dollar organizations have only a vague notion of the names and interactions of their information systems, whether existing or under development. Whenever they need to know, a meeting is held among the critical few, an inventory is taken, interactions are confirmed, and accomplishment schedules are updated.
This ad hoc information systems plan was possible only because all design and development was centralized, the only computer was a mainframe, and the past was an acceptable prologue because budgets were ever increasing, schedules were always slipping, and information was not yet part of the corporation's critical edge.
Well, today is different, really different! Budgets are decreasing, and slipped schedules are being cited as preventing business alternatives. Confounding the computing environment are different operating systems, DBMSs, development tools, telecommunications (LAN, WAN, Intra-, Inter-, and Extra-net), and distributed hard- and software.
Rather than having centralized, long-range planning and management activities to address these problems, today's business units are using readily available tools to design and build ad hoc, stop-gap solutions. These ad hoc systems do not interconnect, support common semantics, or provide synchronized views of critical corporate policy, and they will soon form an almost incomprehensible tangle of systems and data from which systems order and semantic harmony must somehow spring.
Not only has the computing landscape become profoundly different and more difficult to comprehend, the need for just the right--and correct--information at just the right time is escalating. Late or wrong information is worse than no information.
Information systems managers need a model of their information systems environment, and that model must be malleable. As new requirements are discovered, budgets are modified, and new hardware and software are introduced, the model must be able to reconstitute the information systems plan in a timely and efficient manner.
Characteristics of a Quality ISP
A quality ISP must exhibit five distinct characteristics before it is useful. These five are presented in the list that follows.
1.)Timely - The ISP must be timely. An ISP that is created long after it is needed is useless. In almost all cases, it makes no sense to take longer to plan work than to perform the work planned.
2.)Usable - The ISP must be usable. It must be so for all the projects as well as for each project. The ISP should exist in sections that, once adopted, can be parceled out to project managers and immediately started.
3.)Maintainable - The ISP must be maintainable. New business opportunities, new computers, business mergers, etc. all affect the ISP. The ISP must support quick changes to the estimates, technologies employed, and possibly even to the fundamental project sequences. Once these changes are accomplished, the new ISP should be just a few computer program executions away.
4.)Quality - While the ISP must be a quality product, no ISP is ever perfect on the first try. As the ISP is executed, the metrics employed to derive the individual project estimates become refined as a consequence of new hardware technologies, code generators, techniques, or faster working staff. As these changes occur, their effects should be installable into the data that supports ISP computation. In short, the ISP is a living document. It should be updated with every technology event, and certainly no less often than quarterly.
5.)Reproducible - The ISP must be reproducible. That is, when its development activities are performed by any other staff, the ISP produced should essentially be the same. The ISP should not significantly vary by staff assigned.
Whenever a proposal for the development of an ISP is created, it must be assessed against these five characteristics. If any characteristic is missing or not addressed in an optimal way, the entire investment in developing the ISP is at risk.
The information systems plan is the plan by which the databases and information systems of the enterprise are accomplished in a timely manner. A key facility through which the ISP obtains its "data" is the meta data repository. As organizations perform functions in the accomplishment of enterprise missions, they have information needs. These information needs reflect the state of certain enterprise resources, such as finance, people, and products, that are known to the enterprise. The states are created through business information systems and databases.
The majority of the meta data employed to develop the ISP resides in the meta entities supporting the enterprise's resource life cycles, the databases and information systems, and project management.
The ISP Development Steps
The information systems plan project determines the sequence for implementing specific information systems. The goal of the strategy is to deliver the most valuable business information at the earliest time possible in the most cost-effective manner.
The end product of the information systems project is an information systems plan (ISP). Once deployed, the information systems department can implement the plan with confidence that they are doing the correct information systems project at the right time and in the right sequence. The focus of the ISP is not one information system but the entire suite of information systems for the enterprise. Once developed, each identified information system is seen in context with all other information systems within the enterprise.
1.)Create the mission model - The mission model, generally shorter than 30 pages, presents end-result characterizations of the essential raison d'être of the enterprise. Missions are strategic, long range, and apolitical because they are stripped of the "who" and the "how."
2.)Develop a high-level data model - The high-level data model is created in two steps: building database domains and creating database objects. It is critical to state that the objective of this step is a high-level data model. The goal is NOT to create a low-level or fully attributed data model. The reasons that only a high-level data model is needed are straightforward: 1.) no database projects are being accomplished, hence no detailed data modeling is required; 2.) the goal of the ISP is to identify and allocate resources to projects, including database projects, and for that goal entity identification, naming, and brief definitions are all that is required for estimating. The message is simple: any money or resources expended in developing a detailed data model are wasted. The high-level data model is an entity-relationship diagram created to meet the data needs of the mission descriptions. No attributes or keys are created.
2.1.)Create Database Domains - Database domains are created from the "bottom" leaves of the mission description texts. There are two cases to consider. First, if the mission description's bottom leaves are very detailed, they can be considered as already having been transformed into database domains; that is, they will consist of lists of nouns within simple sentences. The other case is that the mission descriptions have been defined to only a few levels, and the lists of nouns that would result from the development of database domains have yet to be uncovered. A series of diagramming techniques created especially for data and the relationships among data is called entity-relationship (ER) diagramming. Within one style of this technique, the entities are drawn as rectangles and the relationships are drawn as diamonds, with the name of the relationship inside the diamond. Another style of ER modeling is simply to have named lines between the entities. In this methodology, since the domain of the diagram is data, it is called the database domain diagram. The purpose of the database domain diagram is not to be precise and exacting but to be comprehensive. The goal is to have the reviewer say, "that's just the right kind of data needed to satisfy the required mission description." When all the database domain diagrams are created, siblings are combined. Entities that are named the same are not presumed to be the same; analysis must show that to be true. If not, one or both of the entities must have their name and definition changed. As the sets of sibling diagrams are merged from lower to higher levels, the quantity of commonly named entities on different diagrams diminishes. Diagram merger becomes optional when usage analysis shows that a common entity is subject to update (add, delete, or modify) in one diagram and is only referenced (read) in another diagram.
2.2.)Define Database Objects - In today's parlance, a lucid policy-procedure pair is called a business object. When the policy-procedure pair is completely defined within the language constructs of ANSI/SQL and is stored, retrieved, and maintained in an ANSI/SQL database through a sequence of well-defined states, the business object is a database object. The goal of database object analysis is to enable the definition of both the data structure and the data structure transformations that: a.) install a new database object in the database; b.) transform a database object from one coherent state to another; c.) remove a database object from the database. Database objects are found by researching business policies and procedures.
Database objects are, however, much more than just collections of policy-homogeneous entities. In fact, database objects consist of four main parts:
1. Data Structure: the set of data structures that map onto the different value sets for real-world database objects such as an auto accident, vehicle, and emergency medicine incident.
2. Process: the set of database object processes that enforce the integrity of data structure fields, references between database objects, and actions among contained data structure segments, along with the proper computer-based rules governing data structure segment insertion, modification, and deletion. For example, the proper and complete storage of an auto accident.
3. Information System: the set of specifications that control, sequence, and iterate the execution of the various database object processes that cause changes in database object states to achieve specific value-based states in conformance to the requirements of business policies. For example, the reception and database posting of data from business information system activities (screens, data edits, storage, interim reports, etc.) that accomplish entry of the auto accident information.
4. State: the value states of a database object that represent the after-state of the successful accomplishment of one or more recognizable business events. Examples of business events are auto accident initiation, involved vehicle entry, involved person entry, and auto accident DUI (driving under the influence of alcohol/drugs) involvement.
Database object state changes are initiated through named business events that are contained in business functions. The business function auto accident investigation includes the business event auto-accident incident initiation, which in turn causes the incident initiation database object information system to execute, which in turn causes several database object processes to materialize the auto accident incident in the database. A database object is specified to the SQL DBMS through the SQL data definition language (DDL). All four components of a database object operate within the "firewall" of the DBMS. This ensures that database objects are protected from improper access or manipulation by 3GLs or 4GLs. A DBMS that only defines, instantiates, and manipulates two-dimensional data structures is merely a simplified functional subset of the DBMS that defines, instantiates, and manipulates database objects. Database objects are completely defined within the database object column of the Knowledge Worker Framework. They are interfaced to the "outside world" by means of business information systems through SQL views. Each view represents the entire set of data, or some subset of a set of data, that truly reflects a known value state of the database object. Culling out the database objects from 600 or so entities requires three simple questions: a.) Does the entity represent only a single value? For example, when the entity Salary is really a single business fact, it should be represented in the metabase as a data element. b.) Does the entity represent a collection of business facts from within another context? For example, when the entity Critical Contract Dates represents multiple business facts within the context of the contract, the entity is a property class and is stored in the metabase as such. c.) Does the entity represent multiple collections of business facts and is self-contained as to context? For example, when the entity Contract contains multiple property classes such as critical dates, signatories to the contract, terms and conditions, items and item quantities, and the like, the entity is a database object and is stored in the metabase as such.
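To make the culling questions concrete, the classification can be sketched in a few lines of code. This is only a minimal sketch in Python: the entity names are the Salary, Critical Contract Dates, and Contract examples used above, and the simple counting rules stand in for the analyst's judgment rather than representing any part of the methodology itself.

```python
# Minimal sketch: classify high-level entities per the three culling questions.
# Entity names and counts are illustrative assumptions, not real metabase content.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    definition: str        # a brief definition is all the ISP needs, no attributes or keys
    facts: int = 1         # how many business facts the entity bundles
    contexts: int = 1      # how many property-class contexts it contains

def classify(e: Entity) -> str:
    """Answer the three culling questions for one entity."""
    if e.facts == 1:
        return "data element"       # e.g., Salary: a single business fact
    if e.contexts == 1:
        return "property class"     # e.g., Critical Contract Dates: facts within one context
    return "database object"        # e.g., Contract: self-contained, multiple property classes

entities = [
    Entity("Salary", "Amount paid to an employee"),
    Entity("Critical Contract Dates", "Key dates within a contract", facts=4, contexts=1),
    Entity("Contract", "An agreement with a customer", facts=30, contexts=5),
]

for e in entities:
    print(f"{e.name}: {classify(e)}")
```

In an actual ISP effort the decision is made by analysis of the entity's definition rather than by counting; the point is only that each of the 600 or so entities ends up in exactly one of the three metabase categories.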
3.)Create the resource life cycles (RLC) and their nodes - Resources are drawn from both the mission descriptions and the high-level data model. Resources and their life cycles are the names, descriptions, and life cycles of the critical assets of the enterprise which, when exercised, achieve one or more aspects of the missions. Each enterprise resource "lives" through its resource life cycle. A mission might be human resource management, wherein the best and most cost-effective staff is determined, acquired, and managed. A database object squarely based on human resources would be Employee. Within the database object Employee are all the data structures, procedures, integrity constraints, and table and database object procedures necessary to "move" the employee database object through its many policy-determined states. A resource might also be named Employee, and would set out for the employee resource the life cycle stages that reflect the employee resource's "journey" through the enterprise. While an enterprise may have 50 to 150 database objects, there are seldom more than 20 resources. Enterprises build databases and business information systems around the achievement of the life cycle states of their resources. Business information systems execute in support of a particular life cycle stage of a resource (e.g., employee promotion). These information systems cause the databases to change the value-states of contained database objects to correctly reflect the resource's changed state. The state of one or more database objects in the database is the proof that the resource's state has been achieved. Resources become the latticework against which databases and business information systems are allocated. The list that follows presents the basic components of resources and their life cycles.
Resources and Resource Life Cycles
Resource - A resource is an enduring asset with value to the enterprise
Resource Life Cycle - A resource life cycle is the linear identification of the major states that must exist within life of the resource. The life cycle of a resource represents the resource’s "cradle to grave" set of state changes.
Precedence Vector - A precedence vector is a relationship between two nodes of different RLCs that indicates that the Target RLC node is enabled in some significant way by the Source RLC node.
RLC Matrix - The RLC matrix is the set of all resources, their life cycles, and the precedence vectors among the nodes. Properly drawn, the RLC matrix resembles a PERT chart.
The ultimate goal of resource life cycle analysis is the identification and description of the major resources essential to the enterprise’s survival, and the ultimate goal of the ISP is the identification and accomplishment sequencing of the information systems projects required to implement the enterprise resources in the most effective manner possible.
3.1.)Determine the Resources - The enterprise's product and/or service resources are defined; they may be either concrete or abstract. Ron Ross provides two guidelines to assist in resource identification: 1. Define the products or services that constitute the enterprise's resources from the customer's perspective. 2. Define the resource as it is managed between the enterprise and its customers.
Characteristics of a resource are:
Basic - The resource must exist for the enterprise to exist.
Complex - The resource requires development and management.
Valuable - The resource must be protected, exploited, and/or leveraged by the enterprise.
Enduring - The resource exists beyond business cycles.
Shareable - The resource is shared by different functions of the enterprise.
Structured - The resource can be described and organized.
Centralized - The resource can be controlled and monitored centrally, even if distributed in creation or use.
Additional tests for resources are:
a.)The resource must be monitored and forecasted. By the time the resource is required, it is too late to be produced.
b.)The resource must be optimized. The resource is of such a cost that an unlimited supply is not possible.
c.)The resource must be controlled and allocated. The resource is desirable and necessary, and must be shared among functions of the enterprise.
d.)The resource must be tracked. Each stage of the resource is important to the enterprise, including its demise.
3.2.)Determine the Resource Life Cycles - The second step is to determine a life cycle for each resource. Each node in the life cycle represents a major state change in the resource. The state change is accomplished by business information systems and is reflected through the enterprise's database objects (conformed into databases). Three figures, developed in support of an enterprise database project for a state-wide court information system, show the resource life cycles for Document, Case, and Court Personnel.
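Because the court-system figures are not reproduced here, a small sketch may help show what a resource life cycle looks like in practice. The node names below are illustrative assumptions, not the actual life cycles from the court project.

```python
# Minimal sketch: each resource is a linear, "cradle to grave" sequence of major states.
# Node names are illustrative assumptions only.
resource_life_cycles = {
    "Case": ["initiated", "docketed", "heard", "decided", "archived"],
    "Document": ["received", "recorded", "served", "filed", "disposed"],
    "Court Personnel": ["recruited", "appointed", "assigned", "evaluated", "separated"],
}

for resource, nodes in resource_life_cycles.items():
    print(resource, "->", " -> ".join(nodes))
```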
4.)Allocate precedence vectors among RLC nodes - Tied together into an enablement network, the resulting resource life cycle network forms a framework of the enterprise's assets that represents an ordering and a set of inter-resource relationships. The enterprise "lives" through its resource life cycle network. A precedence between resources is created when a resource life cycle state, that is, a specific life cycle node, cannot be effective or correctly done unless the preceding resource life cycle state has been established or completed. A precedence arrow, renamed a precedence vector, is drawn from the enabling resource life cycle state to the enabled resource life cycle state.
The most difficult problem in establishing precedence is the mind set of the analyst. The life cycle is not viewed in operational order, but in enablement order: that is, what resource life cycle state must exist before the next resource life cycle state is able to occur. This is a difficult mind set to acquire, as there is a natural tendency to view the life cycle in operational order. The test of precedence becomes: what enables what, and what is enabled by what? For example, project establishment precedes the award of a contract. This does not seem natural, since a project would not operationally begin until after a contract is awarded. However, there must be an established infrastructure to create the project and to perform the work prior to the contract award. A workforce must be in place to perform work, along with the ability to assign work to the employees on the contract and the ability to bill the customer. Therefore, the project enables the contract.
There are three possible meanings for enablement. That is, a resource life cycle state precedes another resource life cycle state because: 1. the accomplishment of the preceding resource life cycle state saves money; 2. the resource life cycle state leads to rapid development of another resource life cycle state; 3. the resource life cycle state permits faster, more convenient accomplishment of another resource life cycle state. If one or more of these indicators exists, then a precedence vector should be created.
Two alternatives exist relative to the existence of the enterprise: newly established or already existing. Experience shows the preferred perspective is that of an already-existing enterprise. RLC states may or may not occur during a life cycle, or events may occur in parallel. For example, an employee may receive an award, but then again, may never receive an award. An employee may work before and after a security clearance is granted. The strategy to deal with parallel or optional RLC states is to create a single stream of RLC states in which none are parallel or optional by "pushing down" the parallel or optional RLC states to a lower level.
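A minimal sketch of precedence vectors follows, reusing the illustrative court-system nodes above. Like a PERT chart, the enablement network must be acyclic, and a topological sort both checks this and yields an enablement ordering; the specific vectors shown are assumptions for illustration, not drawn from the source material.

```python
# Minimal sketch: precedence vectors between RLC nodes form an enablement network.
# The vectors below are illustrative assumptions.
from graphlib import TopologicalSorter  # Python 3.9+

precedence_vectors = [
    # (enabling/source RLC node, enabled/target RLC node)
    ("Court Personnel.assigned", "Case.heard"),      # personnel must be assigned before a hearing
    ("Case.initiated",           "Document.received"),
    ("Document.filed",           "Case.decided"),    # filings enable the decision
]

# Map each node to the set of nodes that must precede it.
predecessors = {}
for source, target in precedence_vectors:
    predecessors.setdefault(target, set()).add(source)

# static_order() raises CycleError if a circular enablement has been drawn.
print(list(TopologicalSorter(predecessors).static_order()))
```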
5.)Allocate existing information systems and databases to the RLC nodes - The resource life cycle network presents a "latticework" onto which the "as-is" business information systems and databases can be "attached." See, for example, the meta model in Figure 2. The "to-be" databases and information systems are similarly attached. "Difference projects" between the "as-is" and the "to-be" are then formulated. Achievement of all the difference projects is the achievement of the information systems plan. Once the resource life cycle network has been created, it is stored in the metabase. Once there, its lattice can be employed to attach the databases and business information systems. Databases and their business information systems exist within a data architecture framework. The five distinct classes of databases are: a.) original data capture (ODC); b.) transaction data staging area (TDSA); c.) subject area databases (SDB); d.) data warehouses (wholesale and retail, a.k.a. data marts); e.) reference data.
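A minimal sketch of how these five database classes typically relate may help before the flow is described in prose below; the function names and row contents are assumptions for illustration only.

```python
# Minimal sketch of the push/pull flow among the data architecture classes.
# Names and row contents are illustrative assumptions.
def push_to_tdsa(odc_rows):
    """ODC applications push captured transactions to the transaction data staging area."""
    return [dict(row, staged=True) for row in odc_rows]

def pull_to_subject_area(tdsa_rows, subject):
    """A subject area database pulls staged data to build a longitudinal, broad view."""
    return [row for row in tdsa_rows if row["subject"] == subject]

def pull_to_warehouse(*subject_area_dbs):
    """Data warehouses (wholesale and retail) pull from one or more subject area databases."""
    return [row for db in subject_area_dbs for row in db]

odc = [{"subject": "employee", "event": "hired"}, {"subject": "case", "event": "initiated"}]
tdsa = push_to_tdsa(odc)
employee_sdb = pull_to_subject_area(tdsa, "employee")
warehouse = pull_to_warehouse(employee_sdb)
print(len(warehouse), "row(s) reach the warehouse")
```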
Most resource life cycle nodes contain at least one original data capture database application. The data from these ODC databases should be pushed to their respective TDSA databases. Once there, various subject area databases pull the data to build longitudinal and broad subject area databases. It is likely that there is one subject area database for one or more resources. Data from the subject area databases, also called operational data stores by Bill Inmon, is again pulled to create one or more data warehouse databases. Most databases employ one or more reference data tables as standard semantics for selection, control breaks, and printing. Databases and business information systems exist in two forms: "as-is" and "to-be." An "as-is" database or information system, as its name implies, represents the existing state of the information technology assets. A "to-be" database or information system is a proposal for some technology improvement, functional enhancement, or an under-way project effort.
5.1.)Allocate Existing (As-is) Databases or Files to Resource Life Cycle Nodes - Within the class of existing databases or files, there are three prototypical examples: a.) a file for every distinct process or purpose; b.) a single database for all reasons; c.) multi-data-architecture database classes.
Knowledge about these existing databases and files should already reside in the metabase. If their metadata is not in the metabase, these databases and files must be discovered. A good way is to research all the reports produced by the information systems department and allocate the file that was employed to produce each report to the RLC node that best fits the representation of the data. Once all the databases and files are allocated, the metabase can produce reports that show RLC nodes that have a "bountiful" quantity of databases and files (not a good sign) and those that have no allocated databases or files (also not a good sign). In the latter case, there probably are databases and files, but they are either "private" or undiscovered; either way, not a good sign. In any case, allocating them to resource life cycle nodes is a matter of distilling the intended purpose of the database or file and then creating the relationship. It is likely that some files or databases will allocate to multiple nodes, and even to different nodes of different life cycles. The quality of the mapping relationships is inversely proportional to the encapsulation of the data to the resource life cycle node. For ODC databases or files, there should be few multi-node mappings. For data warehouse databases there will probably be many multi-node mappings.
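The "bountiful versus empty" report described above is straightforward to derive once allocations are recorded. The sketch below uses a simple in-memory list in place of an actual metabase query, with hypothetical file and node names.

```python
# Minimal sketch: report RLC nodes with too many or zero allocated databases/files.
# File and node names are hypothetical.
from collections import Counter

allocations = [
    ("PAYROLL.DAT", "Employee.paid"),
    ("PAYROLL_HIST.DAT", "Employee.paid"),
    ("PAY_SUMMARY.DAT", "Employee.paid"),   # several files on one node: not a good sign
    ("CASE_INDEX.DB", "Case.docketed"),
]
all_rlc_nodes = ["Employee.hired", "Employee.paid", "Case.docketed", "Case.decided"]

counts = Counter(node for _, node in allocations)
bountiful = [n for n in all_rlc_nodes if counts[n] >= 3]
empty = [n for n in all_rlc_nodes if counts[n] == 0]   # likely private or undiscovered data

print("Bountiful nodes:", bountiful)
print("Nodes with no allocated databases or files:", empty)
```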
6.)Allocate standard work breakdown structures (WBS) to each RLC node - Detailed planning of the "difference projects" entails allocating the appropriate canned work breakdown structures and metrics. Employing WBS and metrics from a comprehensive methodology supports project management standardization, repeatability, and self-learning.
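As an illustration of what allocating a canned work breakdown structure looks like, the sketch below assumes a hypothetical WBS and hour-per-entity metrics; the real structures and metrics come from the chosen methodology, not from this example.

```python
# Minimal sketch: allocate a canned WBS to an RLC node and derive an estimate.
# Task names and hour-per-unit metrics are assumptions.
canned_wbs = {
    "database project": [
        ("detailed data modeling", 8),          # hours per entity
        ("DDL and database object build", 6),
        ("load and test", 4),
    ],
}

def estimate(rlc_node, wbs_name, units):
    tasks = canned_wbs[wbs_name]
    detailed = [(task, hours_per_unit * units) for task, hours_per_unit in tasks]
    return {"rlc_node": rlc_node, "tasks": detailed,
            "total_hours": sum(hours for _, hours in detailed)}

print(estimate("Case.docketed", "database project", units=12))   # 12 entities in scope
```

Because the metrics are held apart from the work plans themselves, refining a metric later automatically re-prices every project that used it when the ISP is recast (see step 10).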
7.)Load resources into each WBS node - Once the resources are determined, these are loaded into the project management meta entities of the meta data repository, that is, metrics, project, work plan and deliverables. The meta entities are those inferred by Figure 2.
8.)Schedule the RLC nodes using the facilities of a project management package - The entire suite of projects is then scheduled on an enterprise-wide basis. The "PERT" chart used by project management is the one represented by the resource life cycle enablement network.
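The scheduling step amounts to a forward pass over the enablement network, which is what a project management package does with a PERT chart. The sketch below reuses the project/contract example from step 4, with illustrative durations in weeks.

```python
# Minimal sketch: a PERT-style forward pass computes the earliest start of each
# RLC node from its enabling predecessors. Durations (in weeks) are illustrative.
from graphlib import TopologicalSorter

durations = {"Project.established": 6, "Contract.awarded": 4, "Employee.assigned": 3}
predecessors = {
    "Contract.awarded": {"Project.established"},   # the project enables the contract
    "Employee.assigned": {"Contract.awarded"},
}

earliest_start = {}
for node in TopologicalSorter(predecessors).static_order():
    enabling = predecessors.get(node, set())
    earliest_start[node] = max((earliest_start[p] + durations[p] for p in enabling), default=0)
    print(node, "can start in week", earliest_start[node])
```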
9.)Produce and review the ISP - The scheduled result is predictable: too long, too costly, and too ambitious. At that point, the real work starts: paring down the suite of projects to a realistic set within time and budget. Because of the meta data environment (see Figure 1), the integrated project management meta data (see Figure 2), and because all projects are configured against fundamental business-rationale-based designs, the results of the inevitable trade-offs can be set against business basics. Although the process is painful, the results can be justified and rationalized.
10.)Execute and adjust the ISP through time - As the ISP is set into execution, technology changes occur that affect resource loadings. In this case, only steps 6-9 need to be repeated. As work progresses, the underlying meta data built or used in steps 1-5 will also change. Because a quality ISP is "automated," the recasting of the ISP should only take a week or less.
Collectively, the first nine steps take about 5,000 staff hours, or about $500,000. Compared to an IS budget of $15-35 million, that is only about 3.3% to 1.4% of the budget.
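Worked out, the arithmetic behind that claim is as follows; the $100-per-hour blended rate is an assumption implied by 5,000 staff hours costing roughly $500,000.

```python
# Worked arithmetic for the cost claim; the blended rate is an assumption.
staff_hours = 5_000
blended_rate = 100                       # dollars per hour, assumed
isp_cost = staff_hours * blended_rate    # = 500,000

for is_budget in (15_000_000, 35_000_000):
    print(f"${is_budget:,} IS budget -> ISP is {isp_cost / is_budget:.1%} of the budget")
```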
If the pundits are to be believed, that is, that the right information at the right time is the competitive edge, then paying for an information systems plan that is accurate, repeatable, and reliable is a small price indeed.
IT projects are accomplished within distinct development environments. The two most common are the discrete project environment and the release environment. The discrete project environment is typified by completely encapsulated projects accomplished through a waterfall methodology.
In release environments, there are a number of different projects underway by different organizations and staff of varying skill levels. Once a large number of projects are underway, the ability of the enterprise to know about and manage all the different projects degrades rapidly. That is because the project management environment has been transformed from discrete encapsulated projects into a continuous flow process of product or functionality improvements that are released on a set time schedule. Figure 3 illustrates the continuous flow process environment that supports releases. The continuous flow process environment is characterized by:
Multiple, concurrent, but differently scheduled projects against the same enterprise resource
Single projects that affect multiple enterprise resources
Projects that develop completely new capabilities, or changes to existing capabilities within enterprise resources
It is precisely because enterprises have transformed themselves from a project to a release environment that information systems plans that can be created, evolved, and maintained on an enterprise-wide basis are essential.
There are four major sets of activities within the continuous flow process environment. The user/client is represented at the top in the small rectangular box. Each of the ellipses represents an activity targeted to a specific need. The four basic needs are: a.) Need Identification; b.) Need Assessment; c.) Design; d.) Deployment.
Specification and impact analysis is represented by the left two processes. Implementation design and accomplishment is represented by the right two processes. Two key characteristics should be immediately apparent. First, unlike the waterfall approach, the activities do not flow one into the other; they are disjoint. In fact, they may be done by different teams, on different time schedules, and involve different quantities of products under management. In short, these four activities are independent of one another. Their only interdependence is through the meta data repository.
The second characteristic flows from the first. Because these four activities are independent of one another, the enterprise evolves by means of releases rather than through whole systems. If it evolved through whole systems, then the four activities would be connected either in a waterfall or a spiral approach, and the enterprise would be evolving through major upgrades to encapsulated functionality within specific business resources. In contrast, the release approach causes coordinated sets of changes to multiple business resources to be placed into production, which produces simultaneous, enterprise-wide capability upgrades across multiple business resources.
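A minimal sketch of this independence follows: the four activities never invoke one another, and their only shared state is the meta data repository. The activity and key names are assumptions for illustration, not an actual repository design.

```python
# Minimal sketch: four independent activities coordinated only through a shared
# meta data repository (here just a dictionary; names are illustrative).
repository = {"needs": [], "assessments": [], "designs": [], "release_plan": []}

def identify_need(repo, description):
    repo["needs"].append(description)

def assess_needs(repo):
    repo["assessments"] = [{"need": n, "impact": "to be determined"} for n in repo["needs"]]

def design(repo):
    repo["designs"] = [{"need": a["need"], "spec": "draft"} for a in repo["assessments"]]

def deploy(repo, release_name):
    repo["release_plan"].append({"release": release_name, "items": list(repo["designs"])})

# Different teams, different schedules: the repository is the only interdependence.
identify_need(repository, "add DUI involvement to auto accident capture")
assess_needs(repository)
design(repository)
deploy(repository, "Release 12")
print(repository["release_plan"])
```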
Through this continuous-flow process, several unique features are present:
All four processes are concurrently executing.
Changes to enterprise resources occur in unison, periodically, and in a very controlled manner.
The meta data repository always contains all the enterprise resource specifications, current or planned. Simply put, if an enterprise resource semantic is not within the meta data repository, it is not enterprise policy.
All changes are planned, scheduled, measured, and subject to auditing, accounting, and traceability.
All documentation of all types is generated from the meta data repository.
In summary, any technique employed to achieve an ISP must be accomplishable with less than 3% of the IT budget. Additionally, it must be timely, usable, maintainable, able to be iterated into a quality product, and reproducible. IT organizations, once they have completed their initial set of databases and business information systems, will find themselves transformed from a project to a release environment. The continuous flow environment then becomes the only viable alternative for moving the enterprise forward. It is precisely because of the release environment that enterprise-wide information systems plans that can be created, evolved, and maintained are essential.
References:
http://www.tdan.com/view-articles/5262
http://www.clarionmag.com/cmag/v3/informationsystemsplanning.pdf