Markswebb's digital customer experience evaluation systems have been around since 2010. One outcome of our research, summarizing and visually reflecting the competitive landscape, is our ranking. Rankings, both local and global, are often met with skepticism and concerns about adequacy, impartiality, and objectivity. Rather than trying to change anyone's mind, we're laying our cards on the table: instead of the usual discussion of market shifts, individual successes, and challenges, we're showing the inner workings of our research. This long read reveals what's behind Markswebb's ratings, using one study as an example: online business banking.
Markswebb’s evaluation system and the study of digital customer experience in online banking are known as the Business Online Banking Rank. It’s one of the agency’s flagship projects, conducted annually since 2013. Last year marked the 10th wave, and the presentation of the 11th wave is scheduled for December 2024.
The Business Online Banking Rank evaluation system is a Markswebb invention, merging advanced methods from the humanities and technical sciences: user research, behavioral economics, human-computer interaction (HCI), cognitive psychology, interface theory, and more.
At the core of the evaluation system is a set of principles:
Markswebb’s unique research approach allows us to dissect a complex concept like digital experience quality down to measurable criteria or “atoms.” Various interaction channels, dozens of user scenarios and their contexts, individual preferences, new technologies, and market experiments are all distilled into hundreds of criteria that describe client capabilities in a digital service and how those capabilities are implemented.
Timofey Barsov, Director of Research and Consulting at Markswebb
In practice, this takes the form of a checklist we use for expert assessment. Researchers model customer scenarios and evaluate whether each criterion is met. We gather user feedback through moderated UX tests with the target audience. In-depth interviews reveal hidden user needs when interacting with digital services, helping us understand the product's usage context, internal expectations, and reasons for specific choices. This basis shapes the blocks of current, market-relevant user tasks.
Each year, the methodology is updated to reflect trends, ensuring that research results not only mirror the market but also “look into the future” to create services relevant for the next year or two.
Then comes the math. Each criterion has a precise weight, as does each block. For each block, we sum the weights of the met criteria, resulting in a service score from 0 to 100 that reflects its alignment with the "maximum package" of online business banking for that year. This score represents Markswebb's view of a benchmark digital service that comprehensively and conveniently meets its target audience's needs.
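The scoring described above can be sketched as a weighted sum of met criteria, normalized to a 0–100 scale. This is a minimal illustration: the block names, weights, and criteria below are hypothetical, not Markswebb's actual (proprietary) values.

```python
# Hypothetical sketch of the 0-100 scoring described above.
# Blocks, criteria, and weights are illustrative only.

def service_score(blocks):
    """blocks: {block_name: [(weight, met), ...]} -> score from 0 to 100."""
    earned = sum(w for criteria in blocks.values() for w, met in criteria if met)
    total = sum(w for criteria in blocks.values() for w, _ in criteria)
    return round(100 * earned / total, 1)

example = {
    "Login": [(2.0, True), (1.5, True), (1.0, False)],
    "Payments": [(3.0, True), (2.5, False), (2.0, True)],
}
print(service_score(example))  # earned 8.5 of 12.0 -> 70.8
```

A service meeting every criterion scores 100, matching the "maximum package" benchmark for that year.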
The ranking is just the headline. Behind it lies the most important part: a clear growth formula for digital experience and business metrics of a digital service, answering questions like:
The research outcome is a comprehensive report, hundreds of pages long, containing detailed gap analysis in the form of comparative tables, implementation maps, and analytical reviews, as well as a selection of best practices for specific tasks and scenarios.
Curious about how we analyze and enhance business banking services? We offer a wealth of insights. Take a look at our collection of best practices right now and see how our research can help drive growth and improve digital experiences for businesses.
For each Markswebb study, we select a baseline list of participants: those who could be providers of best solutions and those shaping mass user experiences in the market. In the case of online business banking, these are the most popular banks by search interest (impartially reflecting high business interest) and digital leaders from the previous wave’s ranking. The criteria list is tailored to each study and evolves with the market.
Alongside the banks we select, any company developing an online bank for small and microbusinesses can participate in the research. This offers a chance to assess their standing compared to top players, identify gaps and advantages, and leverage these insights for strategic decisions. Although this is a paid option, companies that join have no extra advantages over primary participants, aside from the choice to opt out of publishing their rankings.
Julia Morozova, Director of Communications at Markswebb
All participants are evaluated on the same criteria.
The online banking evaluation structure is built around available banking products: the study covers all online banking features, starting from the non-authenticated zone and including all integrated additional services that don’t require separate logins and passwords.
The analysis scope has clear boundaries. For instance, the Business Online Banking Rank doesn’t include adjacent banking services like business registration and account opening, mobile banking apps, or service conditions and rates. These practices and digital services are analyzed in other Markswebb research projects.
To ensure insights meet banks’ needs and align with market trends, the evaluation system is updated with each new wave, adapting to service evolution and our updated market perspective based on project findings from the past year or two.
For example, the 2024 evaluation system is based on over 100 in-depth interviews with entrepreneurs, daily interactions with product teams of major banks, and 10 discovery studies of business client needs and issues.
This approach keeps our research lens sharp. Researchers see services in all their systemic complexity: which features are becoming outdated, how new trends emerge.
Each new research wave considers new situations that entrepreneurs and their businesses might encounter. In the latest wave, for example, a new “Services for Sellers” block and a user task “Participate in tenders/public procurement” were added. The criterion “Work with the block within the mobile bank app interface” was removed from all blocks, as this function is now a baseline and implementations outside the mobile bank are not counted. Outdated criteria related to SWIFT and tracking foreign economic activity events were also removed.
The business online banking study includes both quantitative and qualitative data. Markswebb’s cross-methodology combines User Story and Jobs to Be Done (JTBD) methods, allowing a deeper understanding of how users interact with digital services and what tasks they aim to solve.
User Story — this method describes functional requirements from the user's perspective. Each User Story focuses on a specific use scenario, reflecting user needs and expected outcomes. We use User Stories to create interaction scenarios that show how different user types—from beginners to seasoned entrepreneurs—engage with digital services, which features they use most often, and what challenges they encounter.
Jobs to Be Done (JTBD) — this framework identifies the “job” users hire a product or service to do. Unlike User Story, which focuses on user actions, JTBD centers on broader motives and usage context. We conduct JTBD interviews to uncover users’ internal intentions and needs, understand which tasks they want to solve with the product, and what drives them to choose specific solutions. For example, we might find that entrepreneurs use online banking not only for transactions but also to minimize administrative costs or make quick management decisions.
Our market perspective:
Competitive analysis at this scale would be difficult for an in-house product research team to replicate.
Our entire work process is painstakingly manual. The object of our research is the bank, but the subject is the human user experience. Markswebb’s researchers dedicate months to analysis, avoiding shortcuts with quick, batch data processing.
Ivan Grigorkin, Group Head at Markswebb
Research stages:
The checklist is an Excel table, approximately 1000 rows long, with criteria reflecting user requirements for a “benchmark” online business bank. The checklist is pre-prepared and ready at the start of the study. Criteria are grouped by popular user tasks like login, software setup, document and payment handling, etc.
Each user task is described by a set of criteria, which may reflect the ability to complete the task, task completion speed, information/service navigation availability, informational support, availability of an English version, and other factors.
The criteria are grouped for convenience. Individual criteria are organized into subtasks, subtasks into user tasks, and tasks into task blocks.
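The four-level grouping (criterion → subtask → user task → task block) can be modeled as nested records. This is a sketch under stated assumptions: the class layout and all names below are hypothetical, chosen only to illustrate the hierarchy described above.

```python
# Hypothetical model of the checklist hierarchy:
# criteria -> subtasks -> user tasks -> task blocks. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Criterion:
    text: str
    weight: float
    met: bool = False

@dataclass
class Subtask:
    name: str
    criteria: list = field(default_factory=list)

@dataclass
class UserTask:
    name: str
    subtasks: list = field(default_factory=list)

@dataclass
class TaskBlock:
    name: str
    tasks: list = field(default_factory=list)

    def weight(self):
        # A block's weight is the sum of all criterion weights inside it.
        return sum(c.weight
                   for t in self.tasks
                   for s in t.subtasks
                   for c in s.criteria)

block = TaskBlock("Payments", [
    UserTask("Make a payment", [
        Subtask("Create payment order", [
            Criterion("Payee auto-filled from history", 1.5),
            Criterion("Payment can be saved as a template", 1.0),
        ]),
    ]),
])
print(block.weight())  # 2.5
```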
Here’s an example:
In the 2023 Business Online Banking Rank, for instance, there were 140 scenarios and 780+ criteria reflecting functional completeness and service convenience.
While filling out the checklist, we’re essentially comparing each bank’s product to a benchmark. The comparison involves checking each criterion’s implementation and filling out the checklist for each product under study.
Criteria are rated as follows:
Additionally, researchers add comments and a screenshot link for each functionality.
After completing the checklist, we begin rating calculations.
The rating is a visual summary of a complex comparison. It helps “compactly” show how the market has shifted since the previous wave, identifying which banks currently offer the best services. Our study generates about a dozen ratings:
This approach sums up the competitive landscape and offers a detailed view of each service and scenario block related to specific products. Depending on the bank team’s competitive position and goals, this can help build a comprehensive solution pathway, as each figure represents the synergy of numerous metrics, factors, and calculated connections. The team can follow this path independently or with Markswebb’s support as part of a consulting project.
An example: we consider the "Get advice" criteria block, which includes 15 indicators for evaluating bank chat support. Here, it’s essential to account for both the availability of advice and response speed.
Calculation example:
Advice availability:
Response speed:
Example evaluation:
This method accurately assesses customer service quality and identifies weak points affecting the bank’s overall score.
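A possible way to combine binary availability criteria with a graded response-speed measure is sketched below. The thresholds, weights, and partial-credit formula are assumptions for illustration; the source does not disclose how Markswebb weighs response speed.

```python
# Hypothetical scoring of a "Get advice" block: binary availability criteria
# plus a graded response-speed criterion. Thresholds and weights are
# illustrative assumptions, not Markswebb's methodology.

def speed_credit(response_minutes, full_credit_at=2, zero_credit_at=30):
    """Linear credit: 1.0 at <= 2 minutes, 0.0 at >= 30 minutes."""
    if response_minutes <= full_credit_at:
        return 1.0
    if response_minutes >= zero_credit_at:
        return 0.0
    return (zero_credit_at - response_minutes) / (zero_credit_at - full_credit_at)

def get_advice_score(availability, response_minutes, speed_weight=2.0):
    """availability: {criterion_name: (weight, met)} -> score from 0 to 100."""
    earned = sum(w for w, met in availability.values() if met)
    total = sum(w for w, _ in availability.values())
    earned += speed_weight * speed_credit(response_minutes)
    total += speed_weight
    return round(100 * earned / total, 1)

availability = {
    "Chat reachable without login": (1.0, True),
    "Operator available 24/7": (1.5, False),
    "Chat history is preserved": (1.0, True),
}
print(get_advice_score(availability, response_minutes=2))
```

A slow but otherwise well-equipped chat would thus lose points proportionally, which is one way such a block-level score could surface weak spots.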
How is the final score calculated?
Each checklist criterion has a weight determined by four parameters: criticality, coverage, frequency, and uniqueness. These parameters are rated on a 10-point scale:
Block weight = the sum of the weights of all criteria within it.
Final score = the weights of the met criteria in each block, summed and normalized to a 0–100 scale.
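The weight derivation above can be sketched in code. One caveat: the source names the four parameters (criticality, coverage, frequency, uniqueness, each on a 10-point scale) but not how they are combined, so a plain average is assumed here purely for illustration.

```python
# Hypothetical derivation of a criterion weight from the four parameters
# named above (criticality, coverage, frequency, uniqueness, each 1-10).
# A plain average is an assumption; the real combination rule is not public.

def criterion_weight(criticality, coverage, frequency, uniqueness):
    for p in (criticality, coverage, frequency, uniqueness):
        assert 1 <= p <= 10, "parameters use a 10-point scale"
    return (criticality + coverage + frequency + uniqueness) / 4

def block_weight(criteria_params):
    # Block weight = sum of the weights of all criteria within it.
    return sum(criterion_weight(*p) for p in criteria_params)

def block_score(criteria):
    # criteria: [((crit, cov, freq, uniq), met), ...] -> sum of met weights.
    return sum(criterion_weight(*p) for p, met in criteria if met)

criteria = [((9, 8, 7, 4), True), ((5, 6, 9, 2), False), ((7, 7, 5, 3), True)]
print(block_weight([p for p, _ in criteria]))  # 18.0
print(block_score(criteria))  # 12.5
```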
The study yields a vast array of numbers and tables, which Markswebb's researchers interpret into analytical insights to guide decisions. We identify significant market changes, correlations, and trends, as well as best practices that, when reapplied, can quickly enhance user experience quality, help teams avoid common pitfalls, and reduce time to market. Specifically:
We thoroughly analyze the competitive landscape: market atmosphere, what sets leaders apart from laggards, commonalities and differences within clusters, task implementation details, and what leaders still lack to reach the benchmark.
Ultimately, the Business Online Banking Rank is a tool for understanding the underlying processes shaping digital customer experience in online business banking. We unveil complex connections between user needs, service capabilities, and new market trends, empowering product teams to consciously drive change and create products that meet modern entrepreneurs’ challenges and benefit the bank’s business.
We'd love to hear your questions and feedback in the comments, or you can contact us in any convenient way!
We respond to all messages as soon as possible.
We’ve evolved dozens of successful financial services and are eager to prove that our expertise can be implemented in other industries and around the world. Have a look at our success stories!