Thursday, 20 December 2012

3 Reasons Behind Inadequate Testing of Mobile Apps


The present century ushered in the era of mobile technology, which has opened up many avenues for organizations to grow. This era has also brought various innovations. One such innovation is the mobile application, on which we now depend heavily for almost everything. The Strategy Analytics App Ecosystem Opportunities (AEO) forecast - Mobile Apps Revenue Forecast: 2008 – 2017 - predicts that the smartphone app market will generate more than $35B by 2017, growing from less than $1B in 2009.

However, according to a recent study by Capgemini and Sogeti in conjunction with HP, two-thirds of mobile application companies are inadequately testing their apps. Among organizations that do conduct quality assurance on mobile apps, 64% focused on performance and 46% on functionality, while only 18% focused on security. "Consistent and reliable software applications have become critical to the operations of many organisations. Yet the lack of confidence in most companies' internal abilities to monitor and test the quality of their software is resounding, particularly when it comes to mobile applications," said Michel de Meijer, Global Service Line Testing Lead, Capgemini Group.

Jennifer Lent on searchsoftwarequality.com highlighted some of the reasons why many software testing organizations are not giving mobile app testing the priority it deserves. Take a look at what she has to say.

Testing organizations are not serious about mobile apps

Many testing organizations look at mobile apps as "mini smartphone apps". Steve Woodward of Cloud Perspectives said that many testing organizations have the mentality that the application should simply work, and that any defects in it would not be a big problem. Testing organizations shouldn't hold this mindset, as many of the applications flooding the market today are designed around specific business objectives.

Evaluating app performance in various environments
Mobile applications are expected to perform the same way across very different real-world conditions. However, testing is often restricted to the lab, where it is impossible to replicate the conditions mobile application users will experience in the real world, noted Matt Johnston of uTest.

A Low Price Tag
With the number of mobile applications increasing day by day, there is cut-throat competition among developers to deliver apps at a lower price tag and before the release deadline. To bring costs down, many restrict testing to performance and functionality and ignore the need to security-test their apps.

5 Tips to Choose the Best Automated Mobile Testing Tool


Mobile applications have revolutionized our lives. Today, with the touch of a button, we can get directions to a restaurant, book flight tickets or even order a cab. The shift from traditional desktops to smartphones has increased the demand for mobile applications.

However, the number of smartphones with different features and specifications is increasing daily, and this poses many challenges for testers. Brian MacKenzie, in a blog post on northwaysolutions.com, highlighted some tips to help mobile app testers choose the best automation tools for testing applications.

1) Reusable Scripts
Testers should ensure that test scripts can be reused across devices running the same operating system, irrespective of the version. Many solutions on the market claim to meet this requirement, but in reality many of them don't, even though they come with taglines such as "object based scripting" and "cross OS scripting".
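To illustrate what genuinely reusable, object-based scripting looks like, here is a minimal sketch in Python assuming an Appium-style client. The server URL, capability values and the accessibility IDs ("username", "password", "login") are placeholders for illustration only, and the exact client method names vary between versions.

from appium import webdriver

DEVICES = [
    {"platformName": "Android", "platformVersion": "4.1", "deviceName": "Nexus 7"},
    {"platformName": "Android", "platformVersion": "4.2", "deviceName": "Galaxy S3"},
]

def login_test(caps):
    # Locate elements by accessibility id rather than by screen coordinates,
    # so the same script survives different OS versions and screen sizes.
    driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps)
    try:
        driver.find_element_by_accessibility_id("username").send_keys("demo")
        driver.find_element_by_accessibility_id("password").send_keys("secret")
        driver.find_element_by_accessibility_id("login").click()
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()

for device in DEVICES:
    # One script, many devices: only the capabilities change.
    login_test(dict(device, app="/path/to/app.apk"))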

2) Support Emulators/Simulators and other Physical Devices
Testers should be able to run and record scripts across physical devices as well as simulators and emulators.
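As a rough illustration, the same script should run against an emulator or a physical handset just by swapping the device configuration. The sketch below again assumes Appium-style capability names; the AVD name and device serial are placeholders.

# Emulator: the tool launches or attaches to a named emulator image.
EMULATOR = {
    "platformName": "Android",
    "deviceName": "emulator",
    "avd": "Nexus_4_API_17",
    "app": "/path/to/app.apk",
}

# Physical device: identified by the serial of the USB-attached handset.
PHYSICAL = {
    "platformName": "Android",
    "deviceName": "Galaxy S3",
    "udid": "0123456789ABCDEF",
    "app": "/path/to/app.apk",
}

# The same recorded or hand-written script (e.g. login_test above) should
# accept either configuration without any changes to its steps.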

3) Web Application Support
Hybrid, web-based and native apps each have their own advantages and disadvantages, not only for developers but also for testers and users. Hybrid and web-based apps are becoming popular and may well displace native apps. Before selecting a tool, the tester should ensure that it supports all three types.
 
4) Interruptions Shouldn't Cause the Test to Fail
Common interruptions such as incoming phone calls and messages shouldn't cause a test to fail. Testers should ensure that the tool is not thrown off by such interruptions and can resume the test once the interruption is over.
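One way to exercise this from the test harness, assuming an Android emulator reachable through adb, is to trigger the interruption deliberately and then check that the script resumes. The "gsm call"/"gsm cancel" emulator console commands used below are standard, but the surrounding flow is only a sketch.

import subprocess
import time

def simulate_incoming_call(number="5551234", ring_seconds=5):
    # Place a simulated GSM call to the emulator, let it ring over the app
    # under test, then cancel it so the app returns to the foreground.
    subprocess.run(["adb", "emu", "gsm", "call", number], check=True)
    time.sleep(ring_seconds)
    subprocess.run(["adb", "emu", "gsm", "cancel", number], check=True)

# In the middle of a scripted scenario:
#   simulate_incoming_call()
#   ...then assert that the app is back in the foreground and the script
#   resumes at the interrupted step instead of failing the whole test.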

5) Integration with Performance Testing Tools
As poor application performance can affect revenue, testers should ensure that the solution they plan to select can be integrated with other performance testing tools. Furthermore, the solution should be able to measure RAM, disk, battery and CPU usage.
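Even without a dedicated performance suite, collecting rough device-side metrics during a run might look like the sketch below, assuming an Android device or emulator attached via adb; the package name is a placeholder, and the raw dumpsys output would still need parsing before being fed to a performance testing tool.

import subprocess

def adb_shell(*args):
    # Run a shell command on the attached device/emulator and return its output.
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True, check=True).stdout

def sample_metrics(package="com.example.app"):
    return {
        "memory":  adb_shell("dumpsys", "meminfo", package),
        "cpu":     adb_shell("dumpsys", "cpuinfo"),
        "battery": adb_shell("dumpsys", "battery"),
    }

# Sample before, during and after a scripted scenario, then hand the numbers
# to whatever performance testing tool the team already uses.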

5 Skills Every Tester Should Have


The ever-increasing complexity of applications and the reliance of business organizations on software have changed the face of software testing. Users expect their applications to be not only user-friendly but also defect-free, and this has increased the tester's responsibility. Testers are no longer viewed only as the people responsible for finding bugs and defects in software; they are also the people who instill confidence in users.

Milind Limaye on beyondtesting highlighted some of the skills that every tester is expected to possess.

1.    Communication
Testers are expected to be not only good listeners but also good presenters. They need to communicate with management, users and developers before, during and after development; prepare test cases and test logs; and present test reports. A tester's communication skills include body language, tone, writing style and choice of words.

2.    Domain Knowledge
Although testers are not expected to be domain experts, they are expected to have a basic understanding of the application. This helps them identify the defects a user is likely to face. According to Milind, the tester should keep the domain in mind when deciding the priority of bugs and defects, of test cases and of requirements. They should also be aware of the domain's complexities and challenges.

3.    Desire to Learn
Testers are expected to keep themselves up to date with new technologies, approaches, tools and techniques and apply them during testing. They should always remember that new tools may offer them new and exciting features that can enhance their testing capabilities.

4.    Differentiate the Defects
Testers should be able to identify and differentiate between defects that need immediate attention (high priority) and those that are severe. The test plan should define the various priority and severity levels for bugs.

5.    Planning
Testers must be able to plan the testing process accordingly. The test plan should include the priorities of the various test cases, the number of defects being targeted, and all the functionalities, requirements and features to be covered. A well-planned test effort leads to high customer satisfaction.

Tuesday, 4 December 2012

CATT Stands for Computer Aided Testing Tool

Although CATT is meant to be a testing tool, many SAP users now use CATT frequently to upload vendor master data and to make changes to other master records.

SAP consultants and ABAPers tend to use it for creating test data.

With CATT, you don't have to create any ABAP upload programs, which saves development time. However, you still have to spend time mapping the data into the spreadsheet format.

The transactions run without user interaction. You can check system messages and test database changes. All tests are logged.

What CATT does is record you performing the actual transaction once.

You then identify the fields that you wish to change in that view.

Then you export this data to a spreadsheet, which you populate with the required data.

This is uploaded and executed, saving you from keying in the data manually.
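As a rough sketch of the data-mapping step, the variant data can also be generated programmatically rather than typed into the spreadsheet by hand. The example below assumes the recorded CATT script expects a tab-delimited variant file; the vendor-master column names (LIFNR, NAME1, ORT01) are purely hypothetical examples, so always follow the template exported from your own recording.

import csv

VENDORS = [
    {"LIFNR": "100001", "NAME1": "Acme Supplies", "ORT01": "Chicago"},
    {"LIFNR": "100002", "NAME1": "Globex Corp",   "ORT01": "Berlin"},
]

with open("catt_variants.txt", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["LIFNR", "NAME1", "ORT01"],
                            delimiter="\t")
    writer.writeheader()       # header row must match the exported template
    writer.writerows(VENDORS)  # one row per vendor record to upload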

To use CATT, it has to be enabled in your production environment (your system administrator should be able to do this via transaction SCC4).

You will also need access to your development system to create the CATT script.

SAP R/3 Testing Tool:

SAP R/3 comes with an internal recording tool known as CATT (eCATT). One of the advantages of CATT (eCATT) is that, since it is part of the standard SAP system, it is free of charge. However, CATT does have some limitations that impel many companies to procure other test tools.

There are many vendors offering commercial automated test tools and test management tools for testing SAP. Companies purchasing automated test tools often expect, and erroneously believe, that the test tools will be a panacea for their entire SAP recording and testing needs. Unfortunately, this is not the case, since no two SAP implementations are exactly the same across two or more companies, or at times even within different divisions of the same company. Consequently, a company implementing SAP might need to procure test tools from more than one vendor in addition to the CATT (eCATT) tool.

An SAP implementation could include SAP add-ons such as BW (Business Warehouse), APO (Advanced Planner and Optimizer) or SEM (Strategic Enterprise Management), or even modules such as PS (Project Systems), that generate graphs and charts a recording tool does not recognize. Furthermore, a company may move its SAP GUI from the desktop (fat client) to a web-based (thin client) deployment or to an emulated Citrix session, which could render the existing test tools useless.

Companies that wish to move to an automated testing strategy should articulate and document what SAP modules and SAP add-ons they are installing, in addition to any legacy applications integrating with SAP. This information should be provided to the vendors of automated test tools in order to determine what can actually be recorded and tested with the tools. The company should further investigate with the vendor what additional benefits the automated test tools provide over CATT (eCATT). The objective is to procure the test tools that will maximize SAP recording.