The Key Steps of Comprehensive App Testing

WHITEPAPER

It’s Not About Automated Testing Vs. Manual Testing.

To Deliver a Winning App, You Need Both … and More.

Introduction

The software lifecycle has changed dramatically in recent years. We no longer live in a develop-test-release-move-on world. Users expect higher quality applications and software, and they expect them quickly. To meet this demand, many companies are experimenting with different approaches, from continuous integration to Agile to dogfooding to test-driven development. What all these approaches really boil down to is the simple fact that quality is no longer confined to the QA department. It is now also an engineering problem, a marketing problem, a product problem. It’s a CEO problem.

To meet these increased expectations, the QA lifecycle has changed. A successful company no longer chooses one testing tool, relies solely on automated testing or pushes all the work to testing teams around the world – they now combine all these approaches.

In today’s world, a successful QA approach – one that will lead you to 360° App Quality™ – encompasses automated testing, in-the-wild testing and post-launch analytics. If a company wants to succeed, it needs to recognize the benefits and shortcomings of each of these testing approaches and realize that they work much more efficiently when combined.

Here’s why…

Automated Testing

Automated testing is a powerful, timesaving tool that is a team’s first line of defense.

Odds are more than one developer is working on a project. That means one developer’s code can inadvertently break another’s. Automated test scripts that run quickly and effortlessly whenever a new build is added help teams ensure they are moving in the right direction and not damaging previous work. Automation is also useful for documenting assumptions about how portions of an application should behave. These assumptions and guarantees can be verified with the click of a single button, giving developers confidence that their code meets expectations.
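
As a minimal sketch of the idea, consider a small, hypothetical pricing function and the checks a team might run automatically on every new build (the function, its discount rules and the test names are invented for illustration, not taken from any real codebase):

```python
# Hypothetical example: a tiny pricing function plus the automated
# checks that would run on every build. A failing assert flags that
# someone's change broke previously working behavior.

def apply_discount(total, promo_code):
    """Return the order total after applying a promo code."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    rate = rates.get(promo_code, 0.0)  # unknown codes get no discount
    return round(total * (1 - rate), 2)

def test_known_codes():
    # These asserts double as documentation of the team's assumptions
    # about how discounts are supposed to behave.
    assert apply_discount(100.0, "SAVE10") == 90.0
    assert apply_discount(80.0, "SAVE25") == 60.0

def test_unknown_code_is_ignored():
    assert apply_discount(50.0, "BOGUS") == 50.0

if __name__ == "__main__":
    test_known_codes()
    test_unknown_code_is_ignored()
    print("all checks passed")
```

In practice a team would run checks like these from a test runner wired into the build, so every commit is verified without anyone lifting a finger.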

Automation is also invaluable when testing complicated or time-intensive functions that are difficult to test and verify “by hand.” Why waste a manual tester’s time (and rely on humans, who can easily make a mistake) checking that an app is performing calculations correctly? A computer program will move much more quickly and provide more accurate results in data-driven or complex situations.
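
A data-driven check of this kind can be sketched as a table of inputs and expected outputs run through one loop – the sales-tax function and the numbers below are assumed examples, not from the whitepaper:

```python
# Hypothetical data-driven test: one loop verifies a whole table of
# calculations that would be tedious and error-prone to check by hand.

def total_with_tax(subtotal, tax_rate):
    """Return the subtotal plus sales tax, rounded to cents."""
    return round(subtotal * (1 + tax_rate), 2)

CASES = [
    # (subtotal, tax_rate, expected_total)
    (10.00, 0.00, 10.00),
    (10.00, 0.07, 10.70),
    (19.99, 0.0625, 21.24),
    (0.00, 0.07, 0.00),
]

def run_cases():
    for subtotal, rate, expected in CASES:
        got = total_with_tax(subtotal, rate)
        assert got == expected, f"{subtotal} @ {rate}: got {got}, want {expected}"

if __name__ == "__main__":
    run_cases()
    print(f"{len(CASES)} calculation checks passed")
```

Adding a newly discovered edge case is then just one more row in the table, which is exactly the kind of coverage that grows cheaply under automation.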

The key is to use automated testing wisely, recognizing where it excels as well as its limits. If you don’t know there’s a problem, how can you write a script to check for it? An automated test can tell you if a link works and what level of load an app can withstand. It can’t tell you if that link is hard to click with a human finger or if a video gets choppy under load. GUI issues are hard to catch with automated testing.

If your app is in production, it’s better to test mission-critical, potentially destructive or customer-facing features (like password resets, account deletions and email notifications) in a controlled, manual fashion. There is simply too much damage potential to test live features like that at the push of a button – if something goes wrong, a user’s account can become inaccessible or you can end up spamming your users.

Moreover, automated testing takes place in a confined, lab-based environment far removed from the real-life situations your applications are going to encounter. This alone should be enough to convince you that automated testing on its own isn’t enough. Your users aren’t using your app in a single location with the most up-to-date hardware and software on a perfect connection with no other apps running and no interruptions. That’s why manual testing – particularly testing where your users live, work and play – is an important component of a comprehensive testing strategy.

In-The-Wild Testing

Manual testing is the next step on the path to comprehensive software testing. Real testers looking at an application will catch things that automated testing never will.

The most effective manual testing approach is to move a portion of your testing out of the lab and into the wild. Your app isn’t meant to be consumed in a lab, so why should your testing be confined there? An app that is only tested in a lab environment won’t stand up to real-world quality expectations.

Testing in-the-wild allows you to test with real users, on real devices, running different OS versions, spread around the world on different networks and connections. In short, it’s testing under real-world conditions.

With the size of today’s testing matrix, it’s nearly impossible to cover every hardware/software combination in-house. Even if you did have the budget and space to amass hundreds of mobile devices and dozens of web browsers, you’d still be missing that real-world component. Cloud-based devices, emulators and simulators run into the same issue.

The truth is, there are some bugs you simply won’t encounter in a lab environment. Some will only occur when a device has poor connectivity, some will only occur in certain locations, some issues might arise based on the OS version a user is running or the specific device they’re using, and others might be triggered by other software running concurrently. Despite never showing up in your lab testing, these bugs will be caught by your users if they make it into production.

In-the-wild testing gives you a unique chance to make your users as happy as possible as soon as your application launches. Identify your target audience and the hardware and software they’re most likely to use, where they’re likely to use it from and what they’re likely to do with your app. Then enlist professional testers who match those criteria and put your app through its paces in the real world. You may also be able to take the results and create new automated test scripts to catch similar bugs earlier in the testing process going forward. (Remember, even as you build up your automated testing library, you’ll still need to test manually in-the-wild.)

Using testers who mimic your target audience has the added benefit of showing you how your users will really interact with your application. Your in-house teams are close to the app and know what it’s supposed to do and how it’s supposed to work. Your end users won’t have this kind of in-depth knowledge and may approach the app in a way you didn’t anticipate. Testing in-the-wild will bring these potential issues to light before you disappoint your real users.

Real-world testing will help your app get ready for real-world use – where you’ll encounter the next phase of comprehensive testing.

Post-Launch Analytics

Your job isn’t done just because your application launched. Paying attention to user sentiment, actions, feedback and other metrics is part of a complete 360° App Quality™ approach. Without knowing what your users actually think, you can’t improve your app.

The rise of app stores, online ratings, reviews and social media has forever thrust post-launch analytics into the world of application development and quality assurance. Where an unhappy customer used to tell a handful of friends and maybe call your help line, they can now tell thousands of followers and even more strangers about their problems or disappointment. Headlines that say your app crashes constantly, leaks user data, is buggy, introduces issues to an app people used to love, isn’t compatible with popular hardware or has suffered a major security breach will attract readers and sap your user pool. Users listen to what people are saying, and more often than not, they have other options to replace your lackluster app with.

While this might seem terrifying, it can also be extremely useful. Users will tell you what needs to be better, what they love and where they encounter problems – listen to them. If an issue makes it past your testing, respond decisively. Pinpoint the problem and correct it quickly, but be careful not to introduce another bug in your haste.

Spend time analyzing the data for trends, focusing on the metrics that matter and weeding out useless information or subjective feedback. Knowing the number of users is nice, but it doesn’t tell you anything about the app’s quality. Realizing that the biggest user drop-off point is at login tells you something.

A user not liking your color scheme isn’t important feedback, but hearing that the text is hard to read against the background is something you should look into. Time spent on the app is interesting, but are users going to the most important pages and completing the tasks you want them to? Don’t get bogged down in flashy but ultimately useless data. Pay attention to the metrics that can help you going forward.
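
The drop-off analysis described above can be sketched in a few lines – the funnel steps, event names and user counts below are invented sample data, not real metrics:

```python
# Hypothetical sketch: finding the biggest drop-off point in a simple
# usage funnel from post-launch event counts (all numbers invented).

FUNNEL = ["open_app", "view_login", "login_success", "complete_task"]

# how many users reached each step (made-up sample data)
REACHED = {
    "open_app": 1000,
    "view_login": 950,
    "login_success": 430,
    "complete_task": 390,
}

def drop_off_rates(funnel, reached):
    """Return (step, fraction of users lost relative to the previous step)."""
    rates = []
    for prev, step in zip(funnel, funnel[1:]):
        lost = reached[prev] - reached[step]
        rates.append((step, lost / reached[prev]))
    return rates

def biggest_drop(funnel, reached):
    """Return the step where the largest share of users was lost."""
    return max(drop_off_rates(funnel, reached), key=lambda r: r[1])

if __name__ == "__main__":
    step, rate = biggest_drop(FUNNEL, REACHED)
    print(f"biggest drop-off: entering {step} ({rate:.0%} of users lost)")
```

With this sample data the analysis points at login – over half the users who saw the login screen never got past it – which is exactly the kind of actionable signal raw user counts alone can’t give you.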

Post-launch analytics are your road map to making your app better. If you don’t listen to what your real users are saying (both directly through feedback and implicitly through their actions), you won’t have real users for long.

Conclusion

App quality is no longer a phase of the SDLC; it’s now all-encompassing, spreads across departments and is a piece of the larger app lifecycle. Teams should be considering testing and app quality while designing and wireframing new projects. Developers should be testing their own code and striving to make their apps as bug-free as possible. Automated testing should be employed early, and manual in-the-wild testing should pick up naturally where automation leaves off to create a seamless, complete testing approach. When an app is in the hands of real users, companies should pay attention to what those users are saying and doing, and that data should inform their actions going forward.

Understanding that QA isn’t isolated and that testing isn’t an “Approach A or Approach B” venture will help you produce the highest quality app possible. Don’t get so entrenched in your ways that you’re not willing to try something new, and don’t write off a testing approach because it didn’t work once. The best testers will tell you that the practice is fluid and needs to be influenced by the product you’re working on, the team you’re working with and the users you’re working for. Hold quality as your end goal and do everything in your power to work toward it.

About Applause

Applause is leading the app quality revolution by enabling companies to deliver digital experiences that win – from web to mobile to wearables and beyond. By combining in-the-wild testing services, software tools and analytics, Applause helps companies achieve the 360° App Quality™ they need to thrive in the modern apps economy. Thousands of companies – including Google, Fox, Amazon, Box, Concur and Runkeeper – choose Applause to launch apps that delight their users. Learn more at .
