In this blog post I will differentiate between a proactive approach and a reactive approach to software performance issues.
Reactive approach
• You investigate performance only when you face performance problems after design and coding, to avoid premature optimization.
• Your bet is that you can tune your way out or scale vertically (buying faster, more expensive hardware, or more cloud resources). You experience increased hardware expense and total cost of ownership.
• Performance problems are frequently introduced early in the design and cannot always be fixed through tuning or more efficient coding. Also, fixing architectural and design issues late in the cycle is very expensive and not always possible.
• You generally cannot tune a poorly designed system to perform as well as a system that was well designed from the start.
Proactive approach
• You incorporate performance modelling and validation from the early design phase.
• You iteratively test your assumptions and design decisions by prototyping and validating the performance of each design (e.g. Hibernate vs iBatis).
• You evaluate the trade-offs between performance/scalability and other qualities of service (data integrity, security, availability, manageability) from the design phase onward.
• You know where to focus your optimization efforts.
• You decrease the need to tune and redesign; therefore, you save money.
• You can save money with less expensive hardware or less frequent hardware upgrades.
• You have reduced operational costs.
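Prototyping a design decision, as in the Hibernate vs iBatis example above, can be as simple as timing two candidate implementations under the same workload. A minimal sketch follows; the two candidate functions are hypothetical stand-ins, not real data-access code:

```python
import time

def measure_avg_latency(operation, iterations=1000):
    """Time a callable over many iterations; return average seconds per call."""
    start = time.perf_counter()
    for _ in range(iterations):
        operation()
    return (time.perf_counter() - start) / iterations

# Hypothetical stand-ins for two candidate implementations
# (e.g. an ORM-based query vs a hand-written SQL mapper).
def candidate_a():
    sum(range(100))

def candidate_b():
    total = 0
    for i in range(100):
        total += i

latency_a = measure_avg_latency(candidate_a)
latency_b = measure_avg_latency(candidate_b)
print(f"candidate A: {latency_a * 1e6:.1f} us/call")
print(f"candidate B: {latency_b * 1e6:.1f} us/call")
```

In a real prototype the candidates would exercise the actual technology under a representative workload; the point is to let measurements, not assumptions, drive the design choice.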
Performance modelling process
1. Identify key scenarios (use cases with a specific performance requirement/SLA, frequently executed, consuming significant system resources, or running in parallel)
2. Identify workload (e.g. total concurrent users, data volume)
3. Identify performance objectives (e.g. response time, throughput, resource utilization)
4. Identify budget (max processing time, server timeout, CPU utilization percent, memory MB, disk I/O, network I/O Mbps utilization, number of database connections, hardware & license cost)
5. Identify processing steps for each scenario (e.g. order submit, validate, database processing, response to user)
For each step:
- Allocate budget
- Evaluate (by prototyping and testing/measuring): Does the budget meet the objective? Are the requirement & budget realistic? Do you need to modify design / deployment topology?
- Validate your model.
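The allocate-and-evaluate loop above can be sketched as a simple budget check: each processing step gets a slice of the end-to-end objective, and measurements from prototyping are compared against it. All step names and numbers below are illustrative, not from the original post:

```python
# Illustrative per-step budgets (ms) summing to the scenario objective.
BUDGET_MS = {
    "submit_order": 50,
    "validate": 100,
    "database_processing": 250,
    "render_response": 100,
}
OBJECTIVE_MS = 500  # end-to-end response-time objective for the scenario

# Measurements would come from prototyping / load testing; made up here.
measured_ms = {
    "submit_order": 40,
    "validate": 90,
    "database_processing": 310,
    "render_response": 80,
}

for step, budget in BUDGET_MS.items():
    status = "OK" if measured_ms[step] <= budget else "OVER BUDGET"
    print(f"{step}: {measured_ms[step]} ms (budget {budget} ms) -> {status}")

total = sum(measured_ms.values())
verdict = "meets objective" if total <= OBJECTIVE_MS else "misses objective"
print(f"total: {total} ms (objective {OBJECTIVE_MS} ms) -> {verdict}")
```

A step that blows its budget (here the hypothetical database processing) is exactly the signal to revisit the design or deployment topology, or to renegotiate the requirement, before coding proceeds.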
Performance Model Document
• Performance objectives.
• Itemized scenarios with goals.
• Test cases with goals.
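To make the document's contents concrete, here is a minimal skeleton of such a performance model as structured data; every field name and value is an assumption for illustration, not a standard schema:

```python
# Illustrative performance model document as structured data.
performance_model = {
    "objectives": {
        "response_time_p95_ms": 500,
        "throughput_rps": 200,
        "cpu_utilization_max_pct": 70,
    },
    "scenarios": [
        {"name": "submit_order",
         "workload": "500 concurrent users",
         "goal_ms": 500},
    ],
    "test_cases": [
        {"name": "submit_order_peak_load",
         "scenario": "submit_order",
         "goal": "p95 under 500 ms at 500 concurrent users"},
    ],
}
print(performance_model["objectives"]["response_time_p95_ms"])
```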
Use risk-driven agile architecture
First, prototype and test the riskiest areas (e.g. unfamiliar technologies, demanding SLA requirements). The results will guide your next design step. Repeat those tests (as regression tests) in subsequent spirals, for example using continuous integration. When you address the riskiest areas first, you still have breathing room to look for alternatives or renegotiate with the customer if problems arise.
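A performance regression test that can be repeated in each spiral (e.g. from a CI job) can be sketched as below; the scenario function, budget, and run count are all hypothetical placeholders:

```python
import time

def submit_order():
    # Hypothetical stand-in for the real scenario under test.
    time.sleep(0.01)

def check_submit_order_within_budget(budget_s=0.5, runs=5):
    """Run the scenario several times and fail if the worst-case
    latency exceeds the allocated budget."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        submit_order()
        worst = max(worst, time.perf_counter() - start)
    assert worst <= budget_s, (
        f"regression: {worst:.3f}s exceeds {budget_s}s budget")
    return worst

worst = check_submit_order_within_budget()
print(f"worst-case latency {worst * 1000:.1f} ms within budget")
```

Wiring such a check into continuous integration makes each spiral's performance assumptions hold (or fail loudly) as the design evolves.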