Our Blog
Category

  • Traditional defect severity and priority is dead! Long live “Go-Live” impact!

    All implementations and releases will Go-Live with defects! That sounds ominous, doesn’t it? For decades, software product firms have graded defects / bugs / issues in their applications by severity levels defined by the level of impact on the software product. Just Google “defect severity” and you will find a multitude of explanations, definitions and meanings. One thing everyone agrees on is that this is a point of conflict between business, QA / testing and software development. Here is what we found on one of those Googling expeditions on defect severity. Having done this in our past avatar for years, we don’t deny that this situation is real… and has sometimes even led to fist fights!

    “Defect Severity is one of the most common causes of feuds between Testers and Developers. A typical situation is where a Tester classifies the Severity of Defect as Critical or Major but the Developer refuses to accept that: He/she believes that the defect is of Minor or Trivial severity. Though we have provided you some guidelines in this article on how to interpret each level of severity, this is still a very subjective matter and chances are high that one will not agree with the definition of the other. You can however lessen the chances of differing opinions in your project by discussing (and documenting, if necessary) what each level of severity means and by agreeing to at least some standards (substantiating with examples, if necessary.)

    ADVICE: Go easy on this touchy defect dimension and good luck!”

    Source: http://softwaretestingfundamentals.com

    The biggest issue with defect severity is that it is usually seen as assigning blame for something not working. Here is what traditional defect severity levels look like:

    1. Critical/Showstopper - An issue that affects critical functionality or a critical feature. This is a state of complete failure of that functionality or feature and can at times include data issues. These issues typically have no workaround and need to be addressed immediately.

    2. Major/High - Issues that are less severe than Severity Level 1 but still affect important functionality. These issues might have complicated workarounds, but more often than not need to be addressed before Go-Live.

    3. Minor/Medium - Issues that are less severe than Severity Levels 1 and 2 and may or may not impact important functionality. These issues might have simple workarounds and more often than not do not need to be addressed before Go-Live.

    4. Trivial/Low - Issues that are lowest in severity and do not impact important functionality. These issues have simple workarounds and definitely do not need to be addressed before Go-Live.
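    The four traditional levels above can be captured in a few lines of code. The sketch below is a hypothetical illustration (the names and the blocking rule are ours, not from any standard tool) of how the "fix before Go-Live" rule of thumb falls out of the taxonomy:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Traditional defect severity levels; 1 is the most severe (hypothetical naming)."""
    CRITICAL = 1  # showstopper: no workaround, fix immediately
    MAJOR = 2     # important functionality affected; usually fixed before Go-Live
    MINOR = 3     # simple workarounds; more often than not deferrable past Go-Live
    TRIVIAL = 4   # lowest impact; never blocks Go-Live

def blocks_go_live(severity: Severity) -> bool:
    """Rule of thumb from the levels above: only Severity 1 and 2 typically block release."""
    return severity <= Severity.MAJOR

print(blocks_go_live(Severity.CRITICAL))  # True
print(blocks_go_live(Severity.MINOR))     # False
```

    Note that this hard cutoff at Severity 2 is exactly where the arguments start: the boundary is crisp in code but subjective in practice.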

    Don’t get us wrong: these severity levels are important from a software development perspective. In fact, it would be hard to find anyone who disagrees on Severity 1 and Severity 4 issues. It is the Severity 2 and 3 defects that cause the most heartburn. Add the pressure of an aggressive project timeline, and the thin line between prioritization and defect severity categorization blurs.

    At Go-Live Faster, we work extensively with banks, helping them accelerate product launches and releases by making their technology implementations predictable. A typical bank implementation consists of multiple code drops over a 6-12 month period, with customizations and integrations being delivered in tranches. QA / testing takes place incrementally and defect fixes are delivered in cycles. Another common characteristic of these implementations is that they are almost always behind schedule and over budget.

    Given this situation, these implementations are ripe for constant changes in priority, conflict when assigning severity to defects, and a last-minute scramble to determine "what defects we can live with" and "what defects have to be fixed". We found that the traditional severity levels fall short in this situation faced at banks. While creating Go-Live Faster’s suite of Readiness solutions, our teams came up with a simple yet effective way to prioritize Go-Live impacting issues up front, leading to better prioritization and consensus across teams. At Go-Live Faster we call it the Go-Live criteria. The Go-Live criteria are broken into four main areas of impact for any implementation:

    1. High Go-Live Impact - Revenue and regulation impacting

    2. Medium Go-Live Impact - Indirect revenue impacting and reputation impacting

    3. Low Go-Live Impact - Impacts Go-Live but has workarounds or can be fixed once you Go-Live

    4. No Go-Live Impact - Trivial/ low impact defects that need not be fixed even after you Go-Live

    Further, to make it more quantitative, Go-Live Faster provides a Go-Live Score that indicates the overall readiness to Go-Live with an implementation. The criteria above help achieve consensus between LOB, IT and vendor teams. The exercise is typically driven by the LOB teams, who define what it means to them to have each one of these issues in the system. Once the criteria are frozen (typically at the start of the project), the IT and vendor teams tag defects against them. This is done in addition to traditional severity tagging and acts as a substitute for non-scientific prioritization such as High, Medium and Low.
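    The Go-Live Score itself is proprietary, but the underlying idea of turning tagged defects into a single readiness number can be illustrated with a simplified sketch. The weights, the penalty cap and the linear scaling below are all invented for illustration and are not Go-Live Faster's actual model:

```python
# Hypothetical weights per Go-Live impact category (illustrative only).
IMPACT_WEIGHTS = {"high": 10, "medium": 5, "low": 1, "none": 0}

def go_live_score(open_defects: list[str], max_penalty: int = 100) -> float:
    """Return a 0-100 readiness score: 100 means no Go-Live-impacting defects open.

    `open_defects` is a list of open defects, each tagged with its
    Go-Live impact category ("high", "medium", "low" or "none").
    """
    penalty = sum(IMPACT_WEIGHTS[tag] for tag in open_defects)
    return max(0.0, 100.0 * (1 - min(penalty, max_penalty) / max_penalty))

# One high-impact, two medium-impact, one low-impact and one no-impact defect open:
print(go_live_score(["high", "medium", "medium", "low", "none"]))  # 79.0
```

    The useful property of even a toy model like this is that the LOB, IT and vendor teams argue about the weights once, at the start of the project, rather than about every individual defect at the end.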

    One additional improvement one can make to the criteria above is to add user experience impact to the High and Medium categories instead of just reputation. However, this requires an extremely high level of maturity in a project team, as user experience impact is very subjective and means different things to different people.

    Get in touch with us today to understand how to come up with Go-Live criteria for your next implementation.

    November 17, 2016
  • Top PMO Risks that may Derail your Treasury Management System Implementation

    When it comes to implementing a new treasury management system, banks can be like deer caught in the headlights. There are several unknowns at play: Which technology should be used? Will the system align with business priorities? Will it be implemented within the defined timeline and cost? Will it be able to keep up with new competitors and adapt to new regulations? PMO teams play a significant role in addressing these questions and are pivotal to the success of treasury management system implementations. Although completely failed implementations are rare, the majority of implementations do encounter cost overruns and substantial delays. Post-implementation, most banks also realize that there is a gap between the intrinsic value of the treasury technology they chose and their ability to reap its benefits by getting it to work effectively.

    Over years of helping banks and their PMO teams implement treasury management systems, we have realized that project management has a significant influence on how an implementation project shapes up. We have outlined a comprehensive list of PMO risks to help PMO teams pinpoint where their treasury management system implementation may go wrong:

    • Lack of the right metrics to control the program

    • Support and availability of integrating teams such as Operations, Host and ARP

    • Insufficient involvement from business teams right from the start

    • Gaps in requirement documentation and interface requirement documentation

    • Schedule slippage

    • Unplanned environment downtime

    • Software Configuration (and Release) Management issues

    • Delay in signing off on requirements / customizations by all stakeholders

    • Defect backlog

    • Business continuity planning and disaster recovery issues

    • Gaps in scheduling of interface development

    • No thought given to knowledge management

    • Lack of clear roles and responsibilities and single points of contact

    • Gaps in communication and reporting protocol

    • Coordination amongst various vendors and SLA management

    • Poor prioritization of customizations, code drops and interface development

    • Poor estimation

    • Scope creep

    • Absence of risk management practices

    • Ineffective quality assurance and quality management

    An experienced PMO team needs to be aware of such risks and should be able to devise mitigation strategies before budget and time creep threaten to derail the project, and Product Management and IT teams need to support the PMO in this. There is a tremendous upside to avoiding these risks: not only will you be able to ensure that the implementation project meets its time and cost goals, you will also be able to start realizing ROI from the treasury management system rapidly. Can you think of any other PMO risks that you have faced during an implementation?

    August 8, 2016
  • Risks Faced in Transaction Banking Implementations

    Banks have never been under more pressure than they are now to release more features, faster, on their transaction banking systems. Faced with competition from peers and all forms of third-party processors, the pressure on fee income is being felt across banks. Build vs. buy… to customize or not to… outsource, co-source or in-source… quality vs. stability; the list goes on. The bottom line is that these are turbulent times for the technology initiatives of most banks.

    A typical implementation of a complex transaction banking solution is painful. The average program lasts 18-24 months, takes up the majority of the technology program’s resources and costs millions of dollars. Even with all of this money and time, most banks Go-Live with quality, stability, integration and user experience issues. QA spend alone can be in excess of US $3 million, with at least 3 revisions to the original budget. None of this is due to a specific product; it is the nature of the beast, and it is evident across products, projects and banks. Here are some implementation risks that we have observed and validated with customers:

    • An average transaction banking system implementation can have anywhere from 6 to 10 vendor code drops, depending on size, complexity, quality and stability.

    • For each vendor code drop, internal development teams will usually match it with a code drop of their own.

    • Customers usually plan data migration from one system to another in waves. It’s only in the first wave that you realize there are mismatches and you need to go back to the drawing board.

    • Most product vendors bring a set standard of quality and stability in their applications. Banks tend to over-engineer with too many customizations, underestimate the complexity of their programs and are not always ready for projects of this size and complexity.

    • A typical transaction banking system implementation will have a minimum of 3 full rounds of regression testing across the implementation. Smaller selective rounds are also run.

    • Single Points of Failure and managing scope in these projects are some of the primary causes for many cost and schedule overruns.

    • The quality and stability of these applications are most vulnerable in customizations, integrations, environments and data variables.

    • A typical QA program for a transaction banking system implementation lasts 18 months and costs approximately US $3 to US $4 million. This is the total cost of ownership, including all activities such as user migration, data migration, test management, etc.

    Finding Go-Live impacting issues late in the cycle not only increases the cost of fixing them but could also seriously derail your entire implementation program. The same goes for critical issues missed prior to Go-Live: one major issue could cause you to lose revenue or reputation, get pulled up by the regulator, or simply result in a really bad user experience.

    A possible solution is early detection, prioritization and resolution of issues, which reduces the total cost of your implementation and of subsequent release projects. This involves establishing Go-Live criteria upfront and using a combination of analytics, domain experience and application knowledge to carry out an objective assessment of the implementation’s readiness to Go-Live.

    July 1, 2016

If you are looking for someone to help you accelerate your time to market on product releases, look no further. Get in touch with us today to explore our scientific and analytical reports derived from our proprietary technology!

Our Blog
Category

  • Traditional defect severity and priority is dead! Long live “Go-Live” impact!

    All implementations and releases will Go-Live with defects! That sounds ominous, doesn’t it? For decades software product firms have graded defects / bugs / issues in their applications by severity levels defined by levels of impact to the software product. Just Google defect severity and one finds a multitude of explanations, definitions and meanings. One thing everyone agrees on is that this is a point of conflict between business, QA / testing and software development. Here is what we found on one of those Googling expeditions on defect severity. Having done this in our past avatar for years we don’t deny that this situation is real….and sometimes has even lead to fist fights!

    “Defect Severity is one of the most common causes of feuds between Testers and Developers. A typical situation is where a Tester classifies the Severity of Defect as Critical or Major but the Developer refuses to accept that: He/she believes that the defect is of Minor or Trivial severity. Though we have provided you some guidelines in this article on how to interpret each level of severity, this is still a very subjective matter and chances are high that one will not agree with the definition of the other. You can however lessen the chances of differing opinions in your project by discussing (and documenting, if necessary) what each level of severity means and by agreeing to at least some standards (substantiating with examples, if necessary.)

    ADVICE: Go easy on this touchy defect dimension and good luck!”

    Source: http://softwaretestingfundamentals.com

    The biggest issue with defect severity is that it is usually seen as assigning blame for something not working. Here is what traditional defect severity levels look like:

    1. Critical/Showstopper - An issue in any software that affects critical functionality or feature. This is a state of complete failure of that functionality or feature and can at times include data issues too. These issues typically do not have workarounds and need to be addressed immediately

    2. Major/High - These are issues that are lesser in severity than Severity Level 1 but still affect important functionality. These issues might have complicated workarounds but more often than not need to be addressed before Go-Live

    3. Minor/Medium - These are issues that are lesser in severity than the Severity Level 1 and 2 and may or may not impact important functionality. These issues might have simple workarounds and more often than not do not need to be addressed before Go-Live

    4. Trivial/Low - These are issues that are lowest in severity and do not impact important functionality. These issues have simple workarounds and definitely do not need to be addressed before Go-Live

    Don’t get us wrong, these severity levels are important from a software development perspective. In fact it would be hard to find anyone who disagrees on severity 1 and severity 4 issues. It is severity 2 and 3 defects that cause the most heartburn. Add the pressure of an aggressive project timeline and the thin line between prioritization and defect severity categorization blurs.

    At Go-Live Faster, we extensively work with banks helping them accelerate product launches and releases by making their technology implementations predictable. A typical bank implementation consists of multiple code drops over a 6-12 month period with customization and integrations being delivered in trenches. QA / testing takes place in an incremental manner and defect fixes are delivered in cycles. Some other common characteristics of these implementations are that they are always behind schedule and over-budget.

    Given this situation, these implementations are prime for a constant change in priority, conflict when assigning severity to defects and a last minute scramble to determine "what defects we can live with" and "what defects have to be fixed". We found the traditional severity levels to fall short for this unique situation faced at banks. While creating Go-Live Faster’s suite of Readiness solutions, our teams came up with a simple yet effective way to prioritize Go-Live impacting issues up-front thus leading to better prioritization and consensus across teams. At Go-Live Faster we call it the Go-Live criteria. The Go-Live criteria is broken into four main areas that impact any implementation; i.e.,

    1. High Go-Live Impact - Revenue and regulation impacting

    2. Medium Go-Live Impact - Indirect revenue impacting and reputation impacting

    3. Low Go-Live Impact - Impacts Go-Live but has workarounds or can be fixed once you Go-Live

    4. No Go-Live Impact - Trivial/ low impact defects that need not be fixed even after you Go-Live

    Further, to make it more quantitative, Go-Live Faster provides a Go-Live Score that indicates the overall readiness to Go-Live with an implementation. This table above helps achieve a consensus between LOB, IT and vendor teams. Typically driven by LOB teams, they define what it means to them to have each one of these issues in the system. Once frozen (typically at the start of the project), the IT and vendor teams tag defects based on the Go-Live criteria. This is done in addition to the traditional severity tagging and acts as a substitute for non-scientific prioritization like High, Medium and Low.

    One additional improvement that one can make to the above table is to add user experience impact to the high and medium categories instead of just reputation. However this requires extremely high levels of maturity in a project team as user experience impact is very subjective and means different things to different people.

    Get in touch with us today to understand how to come up with a Go-Live criteria for your next implementation.

    November 17, 2016
  • Top PMO Risks that may Derail your Treasury Management System Implementation

    When it comes to implementing a new treasury management system, banks can be like deer caught in the headlights. There are several unknowns that come into play: Which technology should be used? Will the system align with business priorities? Will the system be implemented within the defined timeline and costs? Will it be able to keep up with new competitors and adapt to new regulations? PMO teams play a significant role in addressing these questions and are pivotal to the success of treasury management system implementations. Although, completely failed implementations are rare, majority of implementations do encounter cost overruns and substantial delays. Post implementation, most banks also realize that there is a gap between the intrinsic value of the treasury technology they chose and their ability to reap benefits by getting it to work effectively.

    Over the years of helping banks and their PMO teams implement treasury management systems, we have realized that project management has significant influence on how an implementation project shapes up. We have outlined a comprehensive list of PMO risks that will help PMO teams pinpoint where their treasury management system implementation may go wrong:

    • Lack of right metrics to control the program

    • Support and availability of integrating teams such as Operations, Host and ARP

    • Less involvement from business teams right from the start

    • Gaps in requirement documentation and interface requirement documentation

    • Schedule slippage

    • Unplanned environment downtime

    • Software Configuration (and Release) Management issues

    • Delay in signing off on requirements / customizations by all stakeholders

    • Defect backlog

    • Business continuity planning and disaster recovery issues

    • Gaps in scheduling of interface development

    • No thought given to knowledge management

    • Lack of clear roles and responsibilities and single points of contact

    • Gaps in communication and reporting protocol

    • Coordination amongst various vendors and SLA management

    • Poor prioritization of customizations, code drops and interface development

    • Poor estimation

    • Scope creep

    • Absence of risk management practices

    • Ineffective quality assurance and quality management

    An experienced PMO team needs to be aware of such PMO risks and should be able to devise mitigation strategies before budget and time creep threatens to derail the project. Product Management and IT teams need to support PMO teams in this. There is a tremendous upside to avoiding these risks. Not only will you be able to ensure that the implementation project meets time and cost goals, you will also be able to rapidly start realizing ROI from the treasury management system. Can you think of any other PMO risks that you may have faced during an implementation?

    August 8, 2016
  • Risks Faced in Transaction Banking Implementations

    Banks have never been under more pressure than now to release more features faster on their transaction banking systems. Faced with competition from peers and all forms of third party processors, the pressure on fee income is being felt across banks. Build vs. buy…to customize or not to…outsource, co-source or in-source…quality vs. stability; this list can go on. The bottom-line is that these are turbulent times for the technology initiatives of most banks.

    A typical implementation of complex transaction banking solution is painful. The average program lasts 18-24 months, takes up majority of the technology program resources and costs millions of dollars. Even with all of this money and time, most banks ‘Go-Live’ with quality, stability, integration and user experience issues. Just QA spends can be in excess of US $3 million+ with at least 3 iterations to the original budget. All of this is not due to a specific product, but because of the nature of the beast. It is evident across products, projects and banks. Here are some implementation risks that we have observed and validated with customers:

    • An average transaction banking system implementation can have anywhere between 6 to 10 vendor code drops depending on the size, complexity, quality and stability.

    • For each vendor code drop, internal development teams will usually match with a code drop of their own.

    • Customers usually plan data migration from one system to another in waves. It’s only in the first wave that you realize there are mismatches and you need to go back to the drawing board.

    • Most product vendors bring a set standard of quality and stability of their applications to the table. Banks tend to over engineer with too many customizations, underestimate the complexity of their programs and are not always ready for projects of this size and complexity.

    • A typical transaction banking system implementation will have a minimum of 3 full rounds of regression testing across the implementation. Smaller selective rounds are also run.

    • Single Points of Failure and managing scope in these projects are some of the primary causes for many cost and schedule overruns.

    • Quality and Stability of these applications are most vulnerable in customizations, integrations, environment and data variables.

    • A typical QA program for implementation of a transaction banking system lasts 18 months and costs approximately US $3 to US $4 Million. This is total cost of ownership that includes all activities such as user migration, data migration, test management etc.

    The cost of finding ‘Go-Live’ impacting issues later in cycles not only increases the cost of fixing them but also could seriously derail your entire implementation program. The same goes for critical missed issues prior to ‘Go-Live’. One major issue could cause you to lose revenue or reputation, get pulled up by the regulator or just result in a really bad user experience.

    A possible solution is an early detection, prioritization and resolution of issues that reduce your Total Cost of Implementation and subsequent release projects. This would involve establishing ‘Go-Live’ criteria upfront and using a combination of analytics, domain experience and application knowledge to carry out an objective assessment of the implementation to determine its readiness to ‘Go-Live’.

    July 1, 2016

If you are looking for someone who can help you accelerate your time to market on product releases look no further. Get in touch with us today to explore our scientific and analytical reports derived from our proprietary technology!

Our Blog
Category

  • Traditional defect severity and priority is dead! Long live “Go-Live” impact!

    All implementations and releases will Go-Live with defects! That sounds ominous, doesn’t it? For decades software product firms have graded defects / bugs / issues in their applications by severity levels defined by levels of impact to the software product. Just Google defect severity and one finds a multitude of explanations, definitions and meanings. One thing everyone agrees on is that this is a point of conflict between business, QA / testing and software development. Here is what we found on one of those Googling expeditions on defect severity. Having done this in our past avatar for years we don’t deny that this situation is real….and sometimes has even lead to fist fights!

    “Defect Severity is one of the most common causes of feuds between Testers and Developers. A typical situation is where a Tester classifies the Severity of Defect as Critical or Major but the Developer refuses to accept that: He/she believes that the defect is of Minor or Trivial severity. Though we have provided you some guidelines in this article on how to interpret each level of severity, this is still a very subjective matter and chances are high that one will not agree with the definition of the other. You can however lessen the chances of differing opinions in your project by discussing (and documenting, if necessary) what each level of severity means and by agreeing to at least some standards (substantiating with examples, if necessary.)

    ADVICE: Go easy on this touchy defect dimension and good luck!”

    Source: http://softwaretestingfundamentals.com

    The biggest issue with defect severity is that it is usually seen as assigning blame for something not working. Here is what traditional defect severity levels look like:

    1. Critical/Showstopper - An issue in any software that affects critical functionality or feature. This is a state of complete failure of that functionality or feature and can at times include data issues too. These issues typically do not have workarounds and need to be addressed immediately

    2. Major/High - These are issues that are lesser in severity than Severity Level 1 but still affect important functionality. These issues might have complicated workarounds but more often than not need to be addressed before Go-Live

    3. Minor/Medium - These are issues that are lesser in severity than the Severity Level 1 and 2 and may or may not impact important functionality. These issues might have simple workarounds and more often than not do not need to be addressed before Go-Live

    4. Trivial/Low - These are issues that are lowest in severity and do not impact important functionality. These issues have simple workarounds and definitely do not need to be addressed before Go-Live

    Don’t get us wrong, these severity levels are important from a software development perspective. In fact it would be hard to find anyone who disagrees on severity 1 and severity 4 issues. It is severity 2 and 3 defects that cause the most heartburn. Add the pressure of an aggressive project timeline and the thin line between prioritization and defect severity categorization blurs.

    At Go-Live Faster, we extensively work with banks helping them accelerate product launches and releases by making their technology implementations predictable. A typical bank implementation consists of multiple code drops over a 6-12 month period with customization and integrations being delivered in trenches. QA / testing takes place in an incremental manner and defect fixes are delivered in cycles. Some other common characteristics of these implementations are that they are always behind schedule and over-budget.

    Given this situation, these implementations are prime for a constant change in priority, conflict when assigning severity to defects and a last minute scramble to determine "what defects we can live with" and "what defects have to be fixed". We found the traditional severity levels to fall short for this unique situation faced at banks. While creating Go-Live Faster’s suite of Readiness solutions, our teams came up with a simple yet effective way to prioritize Go-Live impacting issues up-front thus leading to better prioritization and consensus across teams. At Go-Live Faster we call it the Go-Live criteria. The Go-Live criteria is broken into four main areas that impact any implementation; i.e.,

    1. High Go-Live Impact - Revenue and regulation impacting

    2. Medium Go-Live Impact - Indirect revenue impacting and reputation impacting

    3. Low Go-Live Impact - Impacts Go-Live but has workarounds or can be fixed once you Go-Live

    4. No Go-Live Impact - Trivial/ low impact defects that need not be fixed even after you Go-Live

    Further, to make it more quantitative, Go-Live Faster provides a Go-Live Score that indicates the overall readiness to Go-Live with an implementation. This table above helps achieve a consensus between LOB, IT and vendor teams. Typically driven by LOB teams, they define what it means to them to have each one of these issues in the system. Once frozen (typically at the start of the project), the IT and vendor teams tag defects based on the Go-Live criteria. This is done in addition to the traditional severity tagging and acts as a substitute for non-scientific prioritization like High, Medium and Low.

    One additional improvement that one can make to the above table is to add user experience impact to the high and medium categories instead of just reputation. However this requires extremely high levels of maturity in a project team as user experience impact is very subjective and means different things to different people.

    Get in touch with us today to understand how to come up with a Go-Live criteria for your next implementation.

    November 17, 2016
  • Top PMO Risks that may Derail your Treasury Management System Implementation

    When it comes to implementing a new treasury management system, banks can be like deer caught in the headlights. There are several unknowns that come into play: Which technology should be used? Will the system align with business priorities? Will the system be implemented within the defined timeline and costs? Will it be able to keep up with new competitors and adapt to new regulations? PMO teams play a significant role in addressing these questions and are pivotal to the success of treasury management system implementations. Although, completely failed implementations are rare, majority of implementations do encounter cost overruns and substantial delays. Post implementation, most banks also realize that there is a gap between the intrinsic value of the treasury technology they chose and their ability to reap benefits by getting it to work effectively.

    Over our years of helping banks and their PMO teams implement treasury management systems, we have realized that project management has a significant influence on how an implementation project shapes up. We have outlined a comprehensive list of PMO risks to help PMO teams pinpoint where their treasury management system implementation may go wrong:

    • Lack of right metrics to control the program

    • Support and availability of integrating teams such as Operations, Host and ARP

    • Less involvement from business teams right from the start

    • Gaps in requirement documentation and interface requirement documentation

    • Schedule slippage

    • Unplanned environment downtime

    • Software Configuration (and Release) Management issues

    • Delay in signing off on requirements / customizations by all stakeholders

    • Defect backlog

    • Business continuity planning and disaster recovery issues

    • Gaps in scheduling of interface development

    • No thought given to knowledge management

    • Lack of clear roles and responsibilities and single points of contact

    • Gaps in communication and reporting protocol

    • Coordination amongst various vendors and SLA management

    • Poor prioritization of customizations, code drops and interface development

    • Poor estimation

    • Scope creep

    • Absence of risk management practices

    • Ineffective quality assurance and quality management

    An experienced PMO team needs to be aware of such PMO risks and should be able to devise mitigation strategies before budget and time creep threatens to derail the project. Product Management and IT teams need to support PMO teams in this. There is a tremendous upside to avoiding these risks. Not only will you be able to ensure that the implementation project meets time and cost goals, you will also be able to rapidly start realizing ROI from the treasury management system. Can you think of any other PMO risks that you may have faced during an implementation?

    August 8, 2016
  • Risks Faced in Transaction Banking Implementations

    Banks have never been under more pressure than now to release more features faster on their transaction banking systems. Faced with competition from peers and all forms of third-party processors, the pressure on fee income is being felt across banks. Build vs. buy…to customize or not to…outsource, co-source or in-source…quality vs. stability; the list can go on. The bottom line is that these are turbulent times for the technology initiatives of most banks.

    A typical implementation of a complex transaction banking solution is painful. The average program lasts 18-24 months, takes up the majority of the technology program's resources and costs millions of dollars. Even with all of this money and time, most banks ‘Go-Live’ with quality, stability, integration and user experience issues. QA spend alone can exceed US $3 million, with at least 3 revisions to the original budget. None of this is due to a specific product; it is the nature of the beast, and it is evident across products, projects and banks. Here are some implementation risks that we have observed and validated with customers:

    • An average transaction banking system implementation can have anywhere between 6 to 10 vendor code drops depending on the size, complexity, quality and stability.

    • For each vendor code drop, internal development teams will usually match with a code drop of their own.

    • Customers usually plan data migration from one system to another in waves. It’s only in the first wave that you realize there are mismatches and you need to go back to the drawing board.

    • Most product vendors bring a set standard of quality and stability of their applications to the table. Banks tend to over-engineer with too many customizations, underestimate the complexity of their programs and are not always ready for projects of this size and complexity.

    • A typical transaction banking system implementation will have a minimum of 3 full rounds of regression testing across the implementation. Smaller selective rounds are also run.

    • Single Points of Failure and managing scope in these projects are some of the primary causes for many cost and schedule overruns.

    • Quality and Stability of these applications are most vulnerable in customizations, integrations, environment and data variables.

    • A typical QA program for the implementation of a transaction banking system lasts 18 months and costs approximately US $3 to US $4 million. This is the total cost of ownership, including all activities such as user migration, data migration and test management.

    The cost of finding ‘Go-Live’ impacting issues late in the cycle not only increases the cost of fixing them but could also seriously derail your entire implementation program. The same goes for critical issues missed prior to ‘Go-Live’: one major issue could cause you to lose revenue or reputation, get pulled up by the regulator, or simply result in a really bad user experience.

    A possible solution is early detection, prioritization and resolution of issues, which reduces your Total Cost of Implementation and that of subsequent release projects. This involves establishing ‘Go-Live’ criteria upfront and using a combination of analytics, domain experience and application knowledge to carry out an objective assessment of the implementation's readiness to ‘Go-Live’.
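    One way to picture such upfront criteria is a triage rule that ranks open defects by Go-Live impact first, falling back to traditional severity only as a tiebreaker. The field names and both orderings below are illustrative assumptions, not a prescribed scheme:

    ```python
    # Hypothetical triage: Go-Live impact outranks traditional severity,
    # so a minor-severity blocker is fixed before a critical-severity
    # defect with no Go-Live impact. Category names are assumptions.
    IMPACT_RANK = {"go_live_blocker": 0, "high": 1, "medium": 2, "low": 3}
    SEVERITY_RANK = {"critical": 0, "major": 1, "minor": 2, "trivial": 3}

    def triage(defects):
        """Sort defects so Go-Live impacting ones come first."""
        return sorted(defects, key=lambda d: (IMPACT_RANK[d["go_live_impact"]],
                                              SEVERITY_RANK[d["severity"]]))

    backlog = [
        {"id": "D-7", "severity": "critical", "go_live_impact": "low"},
        {"id": "D-8", "severity": "minor", "go_live_impact": "go_live_blocker"},
    ]
    print([d["id"] for d in triage(backlog)])  # ['D-8', 'D-7']
    ```

    The point of the two-level key is that the consensus reached at project start, not a developer-tester argument at defect-logging time, decides what gets fixed before ‘Go-Live’.
    
    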

    July 1, 2016

If you are looking for someone who can help you accelerate your time to market on product releases, look no further. Get in touch with us today to explore our scientific and analytical reports derived from our proprietary technology!