Like every simple or complex system, an IT infrastructure performs work, and this work can be measured.
The infrastructure's work is to handle information and deliver it to users.
Information is created by a system (the source) and transported over physical connections (the network connections).
It can be created directly by a component of the IT infrastructure (a server) or simply forwarded from an external source (a remote server belonging to another remote infrastructure).
Information reaches the user (the destination) through a device that makes it readable, audible, or otherwise perceptible (e.g. a terminal, display, computer, or transducer).
Roughly, an infrastructure's performance can be measured as the time elapsed between information becoming available and information becoming readable.
The IT infrastructure administrator's duty is to ensure adequate performance, keeping information available to all users of the infrastructure.
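As a rough illustration of this idea, response time can be treated as the interval between requesting information and having it ready for presentation. A minimal sketch in Python; the helper and the workload are hypothetical stand-ins, not part of any real tool:

```python
import time

def measure_response_time(fetch):
    """Return the elapsed seconds between requesting information
    and having it ready for presentation."""
    start = time.perf_counter()          # information requested / becomes available
    fetch()                              # transport and processing happen here
    return time.perf_counter() - start   # information is readable

# Stand-in workload simulating ~50 ms of transport/processing time.
elapsed = measure_response_time(lambda: time.sleep(0.05))
```

In a real infrastructure, `fetch` would be an actual user-visible operation (loading a page, opening a file) rather than a sleep.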
Models are described in detail in many official documents. One of these is the ITIL standard, whose description we encourage you to read for further information.
We begin with a simplified view.
All components of an IT infrastructure take part in information handling, and each affects overall performance according to its characteristics.
A typical IT environment serves many users (even hundreds or more) and many sources.
Sometimes there is only one source (a server, or a device such as a router forwarding information from one or more external sources).
There are even infrastructures with no predefined source of information (i.e. no systems limited to that specific role), where information flows in a flat model.
The simplest (atomic) IT infrastructure model is the "single-server/single-client" model:
server ----> physical_connection ----> network device ----> physical_connection ----> client
Components and their influence
Every IT infrastructure component affects overall performance according to its interaction model.
Below we outline each kind of component and its performance impact.
A server is a system whose specific role is to provide a service (input/output of content or content-related data).
Servers do their work using their own resources, spending time in the process.
Some operations (such as file transfer or file access) are strictly capacity-related.
Other services require complex processing whose response time is hard to predict, or cannot be predicted at all.
Sometimes network interface speed does not affect overall response time at all, because processing time or information access time (disk) dominates.
Key performance factors are:
- network interface speed
- computing capacity needed to provide the information
- disk transfer rate
- multi-user capability
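For capacity-related operations, the first three factors combine in a simple way: the transfer is limited by the slowest stage in the pipeline. A sketch of that estimate in Python; the function name, rates, and file size are illustrative assumptions:

```python
def estimate_transfer_seconds(size_bytes, disk_rate, net_rate, cpu_overhead_s=0.0):
    """Estimate the time to serve a file: the pipeline is limited by the
    slower of disk transfer rate and network interface speed (both in
    bytes per second), plus any fixed processing overhead."""
    bottleneck = min(disk_rate, net_rate)
    return size_bytes / bottleneck + cpu_overhead_s

# 1 GiB file, 120 MB/s disk, 1 Gbit/s (~125 MB/s) network: disk is the bottleneck.
t = estimate_transfer_seconds(2**30, 120e6, 125e6)
```

The same shape of reasoning extends to multi-user load, where each factor is divided among concurrent clients.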
Obviously, any fault or degradation-causing condition can affect performance; this is why preventive diagnostics and corrective activity should come before any performance measurement activity.
Physical connections (Wirings)
Wirings affect performance through their connection quality, which is strictly related to their physical characteristics.
Network devices
These devices have performance limits related to their functions.
Here are some first-hand considerations about common devices:
- Router - packet transport capability, connection table limits and connection transfer rate
- Switch - packet transport capability, overall transfer rate (processing capability for more-than-two layer switch)
- Bridge - packet transport and processing capability, overall transfer rate
- Firewall - packet transport and processing capability, rule-processing capability (extremely variable)
- Access point - packet transport capability and connection transfer rate
- other network devices - (it depends on the device)
The client perspective mirrors the server-side performance considerations.
Client performance is certainly affected by network interface speed, which must be sufficient to carry the desired amount of data from the server to the single client.
After that, the client must prepare the data quickly enough for presentation.
Finally, presentation performance must be sufficient too.
We can summarize overall client response time as the sum of these phases:
network time + preparation time + presentation time
with the slowest phase determining where improvement pays off most.
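The sum above can be sketched directly; the figures below are illustrative, not measured values:

```python
def client_response_time(net_s, prep_s, present_s):
    """Overall perceived response time is the sum of the three phases;
    the largest term identifies the component to improve first."""
    phases = {"network": net_s, "preparation": prep_s, "presentation": present_s}
    return sum(phases.values()), max(phases, key=phases.get)

# Hypothetical timings in seconds; here the network phase dominates.
total, bottleneck = client_response_time(0.30, 0.10, 0.05)
```

Reporting both the total and the dominant phase keeps the measurement actionable: the number says how slow, the bottleneck says where to intervene.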
Performance Measurement Details
Performance can be measured in many ways.
- Giving one or more summary values: massive/average (a value per second, minute, hour, day, week, month, and so on), peak value (the best), floor value (the worst), average value (weighted or not)
- Showing a list of values detected periodically or at specific breakpoints, as text (a data table) or as a graphic (a 2D or 3D chart, etc.)
Complexity of Performance Measurement
Performance measurement is a complex activity.
Performance is sometimes difficult to measure because of a lack of standardization.
IT infrastructures are not standardized enough to support a common set of measurements: there are too many different topologies and too many different services in use across existing networks.
Performance measurements therefore do not always meet (and, frankly, cannot meet) a standardized set of reference measurement models.
This lack of standardization makes values incomparable between infrastructures.
There are certainly some rock-solid methodologies, such as thoroughly measuring real network activity, or measuring real user operations and then simulating them.
With these techniques, performance is captured well and can be simulated with great fidelity to real user behaviour.
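The "measure real user operations, then replay them" approach can be sketched as follows; the replayed operation here is a hypothetical stand-in for a recorded user action:

```python
import time
import statistics

def replay(operation, repetitions=5):
    """Replay a recorded user operation several times and report the
    median latency, so the numbers reflect what a user actually does."""
    latencies = []
    for _ in range(repetitions):
        start = time.perf_counter()
        operation()
        latencies.append(time.perf_counter() - start)
    return statistics.median(latencies)

# Stand-in for a real user operation (e.g. opening a document over the network).
median_s = replay(lambda: time.sleep(0.01), repetitions=3)
```

Using the median rather than the mean keeps a single outlier run from distorting the reported user experience.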
One of the great difficulties in performance measurement is not just the precision and correctness of the methods, but the connection between the numbers and the real world!
Measurements often say "good" while the user experience is "not so good", because the measurements are far removed from the real operation of the IT infrastructure.
This problem makes every built-in performance tool shipped with systems and/or devices unrealistic (and truly far from reality).
There are many benchmark techniques.
Some are well defined; others are rough but easy to execute.
The basic method is based on segmenting the IT infrastructure into functional and/or operational segments.
This method lacks fine precision, but it correlates measurements with a delimited context of the infrastructure. This permits easy management of specific segments, strongly correlating corrective actions with measurements.
With this methodology we can combine interventions on hardware, software, network devices, configurations, and the technologies involved with subsequent cycles of measurement.
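The segmentation method amounts to timing each segment separately so every number maps to a delimited context. A minimal sketch; the segment names and probe callables are hypothetical:

```python
import time

def measure_segments(segments):
    """Time each infrastructure segment separately, so every result maps
    to a delimited context and a possible corrective action."""
    results = {}
    for name, probe in segments.items():
        start = time.perf_counter()
        probe()
        results[name] = time.perf_counter() - start
    return results

# Hypothetical probes; each callable stands in for a real per-segment test.
timings = measure_segments({
    "disk": lambda: time.sleep(0.02),
    "network": lambda: time.sleep(0.05),
    "application": lambda: time.sleep(0.01),
})
slowest = max(timings, key=timings.get)
```

After a corrective action on the slowest segment, re-running the same probes closes the measure-intervene-measure cycle the text describes.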
Some benchmark tools are:
This article uses material from the Wikipedia article IT-infrastructure performances, that was deleted or is being discussed for deletion, which is released under the Creative Commons Attribution-ShareAlike 3.0 Unported License.