In a context where application performance determines service continuity and user experience, migration to the cloud demands rigor and expertise.
An international financial institution entrusts Qim info with taking over an AWS migration project in which stability and costs had become problematic.
The teams intervene to restore performance, reinforce system reliability and control operating costs.
The priorities defined at the start of the mission:
- Secure the reliability of a critical platform used by several subsidiaries.
- Restore performance in line with business requirements.
- Reduce AWS costs over the long term.
AWS cloud migration: challenges and methods
Migration to the cloud represents a profound transformation of architectures and development practices. It commits the company to a complete modernization of its information system, in order to gain agility, scalability and deployment speed.
These benefits are based on true performance engineering, capable of aligning technical choices with business objectives. A cloud-native approach, based on modularity, automation and continuous monitoring, enables full exploitation of cloud capabilities while guaranteeing stability and cost control.
With this in mind, Qim info stepped in to redefine the technical strategy for this major project.
AWS cloud migration: key benefits and risks
Migration to the cloud promises essential technical gains for organizations modernizing their mission-critical systems. In this project, these gains were the customer’s initial objectives.
Expected benefits:
- Enhanced security thanks to managed services and cloud-native protection mechanisms.
- Increased reliability thanks to a multi-zone architecture capable of ensuring service continuity.
- Operational flexibility to quickly adjust environments to business needs.
- Instant scalability to absorb load variations without degrading the user experience.
These benefits only fully materialize if the risks associated with complex architectures and unoptimized deployments are kept under control. It was precisely these unmanaged risks that caused the project’s initial drift, before Qim info’s intervention.
Risks identified:
- Higher costs, particularly in the case of inefficient auto-scaling or poorly dimensioned redundancy.
- Performance degradation due to latency, inadequate configuration or non-cloud-native components.
- Fragile stability, especially when environments lack resilience testing or precise parameterization (e.g. Kubernetes).
- Operational drift, accentuated by the absence of continuous supervision and reliable warning mechanisms.
To secure the benefits and reduce these risks, Qim info teams apply a methodical approach: load testing, advanced tuning, cloud-native adaptation of application components and implementation of comprehensive supervision. This work paves the way for the performance, stability and budgetary control achieved by the end of the mission.

Banking cloud architecture: a demanding context
The customer, an international banking group that ranks among the 50 largest financial institutions in the world, undertook a complete modernization of its wealth management system by initiating a migration to AWS. This change of infrastructure is designed to reinforce the reliability of the application, speed up processing and support the continuing increase in business volume, thanks to a more scalable architecture.
Eighteen months into the project, however, the platform shows signs of fragility: high response times, occasional interruptions and operating costs in excess of five million euros a year. The work involved rebuilding an old on-premise solution whose technological gap with current standards made a straightforward upgrade impossible. This complete overhaul, combined with a complex distributed architecture, naturally increased the technical challenges to be overcome.
The ecosystem is based on a variety of technologies:
- Java for application logic and microservices.
- Oracle 19c for transactional data management.
- ActiveMQ for inter-departmental exchanges.
- Red Hat Enterprise Linux for server infrastructure.
- Kubernetes for container orchestration.
- Microsoft Azure for multiple systems connected to the application core.
It is in this demanding context that Qim info is called upon to secure the architecture, resolve persistent malfunctions and optimize resources in order to restore the expected level of performance and stability.
To find out more about cloud architecture, read our article: Cloud architecture: 5 minutes to understand it all.
Qim info approach: performance, stability, costs
The intervention is based on a proven three-stage approach, designed to restore performance, reinforce stability and sustainably control operating costs.
Step 1: Restore performance
Qim info begins by defining a baseline to accurately measure platform behavior:
- Analysis of actual performance on critical transactions.
- Observation of runtime behavior via Dynatrace (Java microservices, Oracle database, Kubernetes, ActiveMQ flows).
A test and supervision capability is then built:
- Creation of load scenarios to reproduce activity peaks.
- Business flow simulation via ActiveMQ (see the sketch after this list).
- Implementation of unified supervision with Dynatrace.
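To make the flow simulation concrete, here is a minimal sketch of a load injector that publishes a burst of synthetic business messages to an ActiveMQ queue through the standard JMS API. The broker URL, queue name and message payload are illustrative placeholders, not the project’s actual configuration:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

/** Publishes a burst of synthetic messages to reproduce an activity peak. */
public class FlowSimulator {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("orders.inbound")); // placeholder queue

            // Send a burst of messages to emulate a peak in business traffic.
            for (int i = 0; i < 10_000; i++) {
                TextMessage msg = session.createTextMessage("{\"orderId\":" + i + "}");
                msg.setStringProperty("scenario", "peak-load");
                producer.send(msg);
            }
        } finally {
            connection.close();
        }
    }
}
```

Varying the burst size and the message mix is what turns this skeleton into the reusable load scenarios mentioned above.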
Teams run several optimization cycles to eliminate slowness:
- Adjustment of microservices and APIs.
- Optimization of Oracle database and batch processing.
- Adjustment of Kubernetes containers and JVM (Java Virtual Machine) parameters (see the sketch after this list).
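Tuning JVM parameters in Kubernetes usually starts by checking what the JVM actually sees inside its container. A minimal diagnostic sketch, assuming a container-aware JDK (10 or later); the class name and the flag shown are illustrative:

```java
/**
 * Prints the resources visible to the JVM inside its container. Run with,
 * for example, -XX:MaxRAMPercentage=75.0 so the heap tracks the pod's
 * memory limit rather than the node's total RAM.
 */
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("Max heap:       %d MiB%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("Available CPUs: %d%n", rt.availableProcessors());
    }
}
```

Reconciling these figures with the pod’s CPU and memory limits is what removes both over-provisioning and avoidable out-of-memory kills.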
Once the main transactions have been stabilized, Qim info validates the robustness of the system through:
- Advanced business scenarios (demanding processes, critical calculations).
- High-load testing to guarantee resistance in the most demanding situations.
This stage concludes with a structured handover to in-house teams:
- Training developers, QA and DevOps staff in performance best practices.
- Integration of performance tests into the CI/CD chain (see the sketch after this list).
- Provision of reusable scenarios for future developments.
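One way to wire such tests into a CI/CD chain is a latency gate that fails the build when a critical transaction exceeds its budget. A minimal JUnit 5 sketch, where criticalTransaction() is a hypothetical stand-in for the real call and the 500 ms budget is purely illustrative:

```java
import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;

import org.junit.jupiter.api.Test;

/** Performance gate: the build fails if the transaction exceeds its budget. */
class PerformanceGateTest {

    @Test
    void criticalTransactionStaysWithinBudget() {
        // Real thresholds would come from the baseline established in step 1.
        assertTimeout(Duration.ofMillis(500), PerformanceGateTest::criticalTransaction);
    }

    private static void criticalTransaction() {
        // Placeholder for the real business call (e.g. a REST or JMS round trip).
    }
}
```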
Step 2: Reinforce stability
To secure the platform for the long term, Qim info carries out a detailed analysis of the technical components. Several limitations inherited from the previous environment come to light, including obsolete versions of Java, Oracle and the application server, as well as an unsupported Linux system. These elements degrade performance and introduce security risks. The entire technical foundation is therefore brought up to date to ensure full compatibility with the AWS infrastructure.
Once the upgrade is complete, the team conducts a series of stability tests to verify the resilience of each platform component:
- Network slowdowns.
- CPU or memory overload.
- Crash of Kubernetes pods.
- JVM incidents.
- Server or disk failure.
These simulations validate that the platform remains stable in the event of an incident, and that the alert and self-recovery mechanisms function correctly.
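A pod-crash scenario of this kind can be scripted in two steps: inject the failure, then verify that the service answers again within a time budget. The sketch below assumes kubectl access and an HTTP health endpoint; the pod name, namespace, URL and two-minute budget are all illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;

/** Kills one pod, then checks that the platform self-recovers in time. */
public class PodCrashTest {
    public static void main(String[] args) throws Exception {
        // 1. Inject the failure: delete a pod and let Kubernetes reschedule it.
        new ProcessBuilder("kubectl", "delete", "pod", "orders-service-0", "-n", "trading")
                .inheritIO().start().waitFor();

        // 2. Poll the service until it answers again, or fail after the budget.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest probe = HttpRequest.newBuilder(
                URI.create("http://orders-service.trading.svc/health")).build();
        Instant deadline = Instant.now().plus(Duration.ofMinutes(2));

        while (Instant.now().isBefore(deadline)) {
            try {
                HttpResponse<Void> r = client.send(probe, HttpResponse.BodyHandlers.discarding());
                if (r.statusCode() == 200) {
                    System.out.println("Service recovered.");
                    return;
                }
            } catch (Exception connectionStillDown) {
                // Expected while the replacement pod starts up.
            }
            Thread.sleep(5_000);
        }
        throw new IllegalStateException("Service did not recover within the budget.");
    }
}
```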
Qim info then extends the tests to scenarios critical to the banking business, notably those linked to the complete order-execution chain. The aim is to ensure that messages are never lost, duplicated or corrupted. Among the situations tested:
- Invalid or missing messages.
- Saturated message queues.
- Loss of connection to execution services.
These validations guarantee the robustness of the platform on the most sensitive routes for the business.
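Such a check can be expressed as an exactly-once test: every message sent must come back once and only once. A minimal JMS sketch against ActiveMQ, with an illustrative broker URL and queue name:

```java
import java.util.HashSet;
import java.util.Set;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

/** Verifies that no message on the test queue is lost or duplicated. */
public class ExactlyOnceCheck {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("orders.integrity-test"); // placeholder queue

            MessageProducer producer = session.createProducer(queue);
            int total = 1_000;
            for (int i = 0; i < total; i++) {
                producer.send(session.createTextMessage(Integer.toString(i)));
            }

            MessageConsumer consumer = session.createConsumer(queue);
            Set<String> seen = new HashSet<>();
            Message m;
            while ((m = consumer.receive(2_000)) != null) { // 2 s of silence ends the run
                if (!seen.add(((TextMessage) m).getText())) {
                    throw new IllegalStateException("Duplicate message: " + m);
                }
            }
            if (seen.size() != total) {
                throw new IllegalStateException("Lost " + (total - seen.size()) + " messages");
            }
            System.out.println("All " + total + " messages received exactly once.");
        } finally {
            connection.close();
        }
    }
}
```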
Step 3: Control costs
Once performance and stability are restored, analysis reveals excessive resource consumption: oversized servers, too many pods and excessive CPU/RAM allocations. At this point, platform costs still exceed five million euros per year.
Qim info undertakes in-depth optimization of all components to reduce costs without affecting quality of service:
- Revision of CPU, RAM and JVM limits in Kubernetes to eliminate unused capacity.
- Adjustment of Java parameters to reduce memory footprint without impacting response times.
- Simplification of SQL queries and reduction of their number to lighten the load on the Oracle database (see the sketch after this list).
- Improvement of application code and batch processing to limit network and server consumption.
- Global technical review to identify additional long-term savings.
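The SQL simplification mentioned in the list often comes down to replacing per-row queries, the classic N+1 pattern, with a single set-based statement. A minimal JDBC sketch; the table and column names are illustrative, not the project’s actual schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Collections;
import java.util.List;

/**
 * Fetches the positions of many accounts in one statement instead of one
 * query per account (the classic N+1 pattern).
 */
public class PositionDao {

    static void loadPositions(Connection db, List<Long> accountIds) throws SQLException {
        // Build the "?,?,?" placeholder list for the IN clause.
        // (Oracle caps IN lists at 1,000 elements; chunk larger batches.)
        String placeholders = String.join(",", Collections.nCopies(accountIds.size(), "?"));
        String sql = "SELECT account_id, isin, quantity FROM positions"
                + " WHERE account_id IN (" + placeholders + ")";

        try (PreparedStatement ps = db.prepareStatement(sql)) {
            for (int i = 0; i < accountIds.size(); i++) {
                ps.setLong(i + 1, accountIds.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Map each row to a domain object here.
                }
            }
        }
    }
}
```

Cutting the query count this way reduces round trips to Oracle, which lowers database load and shortens batch runtimes.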
This structured approach reduces operating costs by around a third, while maintaining the same level of performance.
Key technologies for a high-performance platform
The success of the project depends on the mastery of a complete technological ecosystem. Qim info relies on complementary tools to ensure the reliability, supervision and operational efficiency of the cloud platform.
AWS and Kubernetes
These two technologies form the core infrastructure of the project:
- AWS provides the flexibility needed to deploy and adapt environments according to business load.
- Kubernetes orchestrates application containers, automates resource allocation and facilitates disaster recovery.
Qim info adjusts their configurations to ensure an optimum balance between performance, stability and cost control.
Oracle and ActiveMQ
These components ensure the management and fluidity of transactional flows:
- Oracle 19c stores and secures financial data, guaranteeing the integrity and traceability of operations.
- ActiveMQ synchronizes exchanges between microservices and ensures processing continuity.
Qim info reinforces their integration and implements advanced supervision to prevent any failure.
Dynatrace
System monitoring is based on Dynatrace, the benchmark solution for monitoring complex cloud environments.
Thanks to its continuous analysis, the tool measures response times, resource consumption and interactions between different application components.
This complete visibility enables Qim info engineers to design dashboards tailored to customer needs, and quickly identify sources of slowdown.
Internal teams then use this information to anticipate degradations, adjust performance and guarantee stable, continuous operation.
Qim info cloud expertise: governance and performance
Qim info’s added value lies as much in its technical expertise as in its ability to structure complex projects.
The consultants set up a clear governance structure, punctuated by monitoring committees, shared indicators and continuous communication with the customer’s teams. This organization speeds up decision-making and ensures alignment between technical choices and business challenges.
The project mobilizes cloud architects, performance specialists, DevOps engineers and data experts. This multidisciplinary coordination ensures complete control of the project, from initial diagnosis through to operation.
This expertise applies to all critical environments, beyond the banking sector, and supports organizations engaged in strategic cloud projects.
Discover the cloud engineer role.
Results on AWS
At the end of the mission, the platform fully met performance and availability requirements. Indicators show concrete and lasting improvements:
- The 25 application services meet their service commitments.
- Critical incidents disappear entirely.
- Processing times are halved.
- Operating costs are cut by a third, saving around 400,000 euros a month.
- Continuous supervision by Dynatrace prevents anomalies and maintains a constant level of quality.
The customer now has a stable, high-performance, scalable environment, supported by a trained, autonomous in-house team.
Qim info: AWS cloud partner
With offices in Geneva, Lausanne, Zurich, Basel, Annecy and Lyon, Qim info supports companies in the design, migration and operation of their cloud environments. Its expertise spans AWS, Azure and GCP, including supervision, application modernization, cost optimization and reliability of mission-critical systems.
The Performance & Observability department brings together over 30 specialists dedicated to performance engineering. In partnership with Dynatrace, Grafana Labs, Octoperf, Splunk and Cisco, Qim info offers complete visibility of IT environments and accelerates incident detection.
With over 20 years’ experience and 600 employees, Qim info is a benchmark partner for companies seeking innovation, budget control and operational excellence.
Contact our Performance & Observability experts to assess the performance of your systems and define an optimization strategy tailored to your needs.
Click here to find out more about our cloud migration services.
FAQ on cloud migration
What is cloud migration?
Cloud migration is the process by which a company transfers its IT resources (data, applications, servers) to a cloud computing environment. It replaces rigid and costly local systems with more flexible, scalable and remotely accessible infrastructures. The aim is to improve performance, security and agility, while reducing operating costs.
What are the different types of cloud migration?
Several migration methods are available, depending on the company’s objectives:
- Retaining: keeping certain solutions local, for reasons of compliance or performance.
- Rehosting (Lift & Shift): direct transfer of applications to the cloud, without modification.
- Replatforming: light adaptation to exploit certain cloud functionalities.
- Refactoring: complete redesign of applications to take advantage of native cloud architectures.
- Retiring: removal of tools no longer required during migration.
Who are the most renowned cloud service providers?
The main global players are:
- Amazon Web Services (AWS): comprehensive, widely used in all sectors.
- Microsoft Azure: popular with businesses for its integration with the Microsoft ecosystem.
- Google Cloud Platform (GCP): recognized for its performance in data analysis and AI.
What’s the difference between SaaS and the cloud?
The cloud is a global model for hosting IT services online. It is divided into three main categories: IaaS (Infrastructure), PaaS (Platform) and SaaS (Software).
SaaS, or software as a service, is a type of cloud service. It enables users to access an online application via a browser, without having to install it locally. Google Workspace and Salesforce, for example, are SaaS solutions.
In short, SaaS is part of the cloud, but not everything in the cloud is SaaS.