Wednesday 26 April 2017

Test Automation Framework with Selenium WebDriver

This blog post on creating and using a test automation framework should help anyone with a working knowledge of Java and Selenium get started with the ever-popular “Page Object Model” (POM) based test automation framework.

Why Test Automation?

Test automation, as you know, helps speed up testing when it becomes monotonous and time-consuming. Automation helps when we test something repeatedly, since humans are prone to mistakes while performing monotonous actions.

Automation removes the monotony of repeated tasks such as:

  • Running the same tests with different sets of data
  • Running the same set of tests on every new build, which would otherwise be done manually
  • The test, find bugs, fix, regress cycle of bug fixing

Benefits of Test Automation

The benefits are:

  • It reduces the time taken to run tests, saving manual effort
  • It runs tests in parallel, further reducing manual testing effort
  • It runs tests on more than one browser, saving manual effort
  • It runs more tests with different sets of data, giving better coverage

What is a Framework?

A framework is just a set of rules and structures, which makes it easy to get a suite of tests up and testing in minimal time. For example, assign test data in a specific folder, store configuration settings in a specific file and folder, name the tests in this template, create packages, etc.

Generally a test automation framework is built for a specific application, but frameworks can be generic as well. Examples of generic frameworks are:

  • Linear
  • Modular
  • Data Driven
  • Hybrid – a combination of the three above

A test automation framework simply helps you develop a suite of tests, connect to the application under test, run the tests against it in the required order, and extract test execution results.

What is the Page Object Model?

The Page Object Model is a design pattern for testing, derived from Object-Oriented Programming concepts. POM models the web application as the set of web pages being used, with each page class containing the page's elements as properties and its actions as methods. This keeps maintenance of the developed tests low.

For example, for a generic login page, you would want to:

  •  “set user name to a String value passed from the test”,
  •  “set password to a String value passed from the test”,
  •  and “click on submit”.

Once these tasks are completed, the test would need to “validate whether the home page is loaded for the set user name”.

Creating the test automation framework

We will now walk through creating and designing a simple test automation framework for running tests on web applications, using Selenium WebDriver, TestNG and Maven, with Java as the programming language. This framework should let you run the tests from the Eclipse or IntelliJ IDEA IDE.

Tools used are:

  •  Apache Maven – defines the project structure and manages dependencies
  •  Selenium WebDriver – the test automation tool/library
  •  Selenium Grid – a feature to run tests on multiple test environments across a network
  •  TestNG – test runner and report engine
  •  Java – the language that ties all components together

The test suite is built from the following components.

BasePage

BasePage is a class which contains the reusable methods that will be used by the various pages. For example, the BasePage class contains a “getWebElement” method which returns a web element when you pass it the locator and the driver instance.
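As a minimal sketch (one common variant keeps the driver in the base-class constructor, so the method takes only the locator; the explicit wait and Selenium 4 style `Duration` timeout are my assumptions, not from the original):

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Illustrative sketch: reusable helpers shared by all page classes.
public class BasePage {

    protected final WebDriver driver;

    public BasePage(WebDriver driver) {
        this.driver = driver;
    }

    // Returns the element for the given locator, waiting until it is present.
    protected WebElement getWebElement(By locator) {
        return new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.presenceOfElementLocated(locator));
    }
}
```

Every page class then extends BasePage and inherits getWebElement instead of re-implementing element lookup.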

Pages

Pages are a set of classes named after the web page each represents. For example, LoginPage and HomePage are so named because of the web pages they represent. The LoginPage will contain the public methods to perform login, and private attributes for the username and password fields. Keeping these attributes private maintains encapsulation, hiding details the tests do not need.

BaseTest

BaseTest is a class which contains the reusable methods that will be used by the various tests. BaseTest is expected to contain the “@BeforeTest”, “@AfterTest”, “@BeforeClass” and “@AfterClass” annotated methods that manage the test sequence. BaseTest will also hold the reference to the public WebDriver variable, which is instantiated and initialized in the @BeforeTest or @BeforeClass method when the test starts, and continues as a static variable until the @AfterTest or @AfterClass method is called to end the test.
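A sketch of such a BaseTest (the "browser" parameter name and the chrome/firefox defaults are illustrative assumptions; @BeforeTest/@AfterTest pair with one driver per &lt;test&gt; entry in testng.xml):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Parameters;

// Illustrative sketch: one driver instance per <test> in testng.xml.
public class BaseTest {

    protected static WebDriver driver;

    @BeforeTest
    @Parameters({"browser"})
    public void setUp(String browser) {
        // Choose the driver from the testng.xml parameter.
        if (browser.equalsIgnoreCase("firefox")) {
            driver = new FirefoxDriver();
        } else {
            driver = new ChromeDriver();
        }
    }

    @AfterTest
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}
```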

Tests

Tests are a set of classes named after the test to be conducted for each web page. For example, LoginTest is so named because of the test methods used to test the login page/action. LoginTest will contain public methods annotated with “@Test” to test the login functionality. It instantiates the LoginPage class using the PageFactory.initElements method, then invokes the LoginPage login method with username and password parameters, which sets the values in the corresponding fields of the web application and clicks the submit button. Validations are inserted after the submit click so that the login can be verified: boolean values are returned from the LoginPage, and assertions are performed at the test level.
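A sketch of such a test (it assumes a BaseTest that exposes the driver and a LoginPage with a boolean-returning login method, per the description above; the credentials are hypothetical placeholders):

```java
import org.openqa.selenium.support.PageFactory;
import org.testng.Assert;
import org.testng.annotations.Test;

// Illustrative sketch: LoginTest extends BaseTest, which owns the driver.
public class LoginTest extends BaseTest {

    @Test
    public void testValidLogin() {
        // Build the page object and wire up its elements.
        LoginPage loginPage = PageFactory.initElements(driver, LoginPage.class);

        // The page object performs the actions; the test only asserts.
        boolean loggedIn = loginPage.login("testuser", "s3cret"); // hypothetical credentials
        Assert.assertTrue(loggedIn, "Home page should load for the logged-in user");
    }
}
```

Note the division of labour: element interaction lives in the page object, while pass/fail decisions stay in the test.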

Utilities

Utilities contain the various reusable classes and methods supporting the overall run of the test suite. An example of a utility class is a TestListener implementation that captures screenshots on failures.
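A sketch of such a listener (it assumes a BaseTest exposing a static driver, and TestNG 7+ where ITestListener's methods have default implementations so only onTestFailure needs overriding; the "screenshots" directory name is an assumption):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.ITestListener;
import org.testng.ITestResult;

// Illustrative sketch: capture a screenshot whenever a test fails.
public class ScreenshotListener implements ITestListener {

    @Override
    public void onTestFailure(ITestResult result) {
        WebDriver driver = BaseTest.driver; // assumes BaseTest exposes its driver
        if (driver instanceof TakesScreenshot) {
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            try {
                Path dir = Files.createDirectories(Paths.get("screenshots"));
                Files.copy(shot.toPath(),
                        dir.resolve(result.getName() + ".png"),
                        StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
```

The listener is registered either with @Listeners on the test class or via a &lt;listeners&gt; element in testng.xml.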

Project Structure of the Test Automation Framework

The project structure in Eclipse or IntelliJ follows the standard Maven layout described below.

Steps to create the framework in Eclipse

Let us see how to create a new framework in Eclipse:
  • Create a new Maven project
  • Provide a suitable package name for your project :: com… is a good naming convention you can follow
  • The folder structure would look like this: src/main/java, src/main/resources, src/test/java and src/test/resources
  • All our code is divided into two parts:
  • Test objects and data providers go into src/test/java :: meaning all TestNG-related code should go into src/test/java
  • Page objects go into src/main/java
  • You would have named your package when creating the Maven package structure
  • Within src/main/java and src/test/java you will see your defined package name and a sample Java file. Remove these files.
  • Your project structure should look something like below:
  • Under src/main/java you can have com….pages
  • Under src/test/java you can have com….tests and com….datastore
  • You can have sub-packages within these main packages to identify modules
A big disclaimer here: this is only a suggested approach. You can choose any of the different approaches out there.
Based on your choice of browsers, you can use the TestNG structure to set up your base test.

TestNG setup

You can specify the browser name, version and OS platform in the TestNG file to distinguish between classes, tests or the suite. Since I am going to use tests for my differentiation of browsers, I would use a testng.xml structured with one <test> entry per browser.
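As a sketch (the suite, test and class names such as com.example.tests.LoginTest are illustrative placeholders, not from a specific project), a testng.xml that differentiates browsers per <test> entry could look like:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="CrossBrowserSuite" parallel="tests" thread-count="2">
  <test name="ChromeTest">
    <parameter name="browser" value="chrome"/>
    <classes>
      <class name="com.example.tests.LoginTest"/>
    </classes>
  </test>
  <test name="FirefoxTest">
    <parameter name="browser" value="firefox"/>
    <classes>
      <class name="com.example.tests.LoginTest"/>
    </classes>
  </test>
</suite>
```

Each <test> passes its own browser parameter, which the @Parameters-annotated @BeforeTest method picks up.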

PageObjects

For the page objects, you can have differences in how you identify and interact with elements. This is sample code I have been using:

import org.openqa.selenium.By;

public class LoginPage {
    By txtUserName = By.id("uid");
    By txtPassword = By.id("password");
    By pageTitle = By.xpath("html_title");
    By btnLogin = By.id("btnLogin");
}
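To complete the sketch (the constructor, private fields and the login method below are my illustrative additions, not from the original sample; the home-page title check is an assumed validation), the page class can expose the login action as a method returning a boolean the test can assert on:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Illustrative completion of the LoginPage sample above.
public class LoginPage {

    private final WebDriver driver;

    private final By txtUserName = By.id("uid");
    private final By txtPassword = By.id("password");
    private final By btnLogin = By.id("btnLogin");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Sets the credentials, submits, and reports whether the home page loaded.
    public boolean login(String username, String password) {
        driver.findElement(txtUserName).sendKeys(username);
        driver.findElement(txtPassword).sendKeys(password);
        driver.findElement(btnLogin).click();
        return driver.getTitle().contains("Home"); // assumed home-page title
    }
}
```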

TestObjects

For the tests, we would be using @Test annotations to define the tests.
@Test
public void testClass1() {
    // Test code here
}
For the base test, you would need a @BeforeTest method that defines the browser and driver to be used. This setup is based on the parameters passed, or on any initial configuration you have defined. In my testng.xml, I have parameters defined for every test; generally these should be very similar, and adding superfluous extra parameters to individual tests is not recommended.
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Parameters;

@BeforeTest
@Parameters({"browser"})
public void beforeTest(String browser) {
    // Here you define how to set the driver based on the parameter
    WebDriver driver = null;
    if (browser.equalsIgnoreCase("chrome")) {
        driver = new ChromeDriver();
    } else if (browser.equalsIgnoreCase("firefox")) {
        driver = new FirefoxDriver();
    }
}
I would like to stop here. This is something that gives one enough ammunition to start. More to come in the subsequent posts.
Don’t forget to leave your comments on improving this blog/post.
Thank you.

Sunday 23 April 2017

Test Drive @ Agile Highway


“Agile Development” is an umbrella term for several iterative and incremental software development methodologies. One of the great things about agile is that it is an approach that travels alongside our mind rather than a set of uncompromisingly rigid rules; it encourages us to adapt the implementation as we practice.
I know what it’s like to be looking on at a practice/methodology like agile. It’s an established practice now, which clearly works for the software development teams. When you see that working, and you see how highly people value it, you obviously want to make use of it yourself.
But, I’ve got two peeves in particular, where I think many people misinterpret and mangle the spirit of what agile is all about. We’re trying to prevent this at Indium, and we’d love to hear from others who have ideas as to how to improve this even further!

Peeve #1: 100% Test Coverage & Efficiency!!! No Defects Leaked


Mostly, the mockery such claims attract is justified. Yes, changes happen so ad hoc in real-time agile that meeting 100% test coverage is difficult to achieve, as the QA team faces time crunches:
  • For the build to be available
  • To document & execute test cases
  • To measure test coverage / efficiency
So how do we handle this efficiently?
Well! We have a solution for such demanding situations, as we have seasoned ourselves with various permutations and combinations of approaches.
The majority assume that the application is tested thoroughly because only smaller chunks of work are done in a Sprint, which saves a lot of time. What people really miss is how much testing is actually done and how early the testers on agile projects get involved.
We strongly recommend the shift-left technique. The earlier the testers get involved, the more efficient the application will be. The quality of development improves when testers are involved early, especially in agile projects.
The Test Driven Development strategy has the tester document the test scenarios for each requirement in the user stories within the first couple of days of the Sprint plan. The scenarios include positive, negative and corner cases. These are reviewed and approved by the business and passed on to the development team, who in turn validate their code against these scenarios. This saves rework on coding and also gives the testers sufficient time to quickly assess the feature against the requirements.
The testers work alongside the development team on the developers' own boxes or on any environment that is available. They run the tests any number of times to verify and validate that the feature is built the right way.
The recommended approach does not guarantee zero defects, as the testers do not really have time to validate the integrated piece; they assess only at the unit level, which helps quantify the completeness of the feature against its requirement. The feature is 100% tested for its own requirement as a unit and cleared.

Peeve #2: Overcoming Regression Rue

According to the Agile Manifesto:
“Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.”
We had this issue when we first introduced the concept of Sprints in our team. The key objective of introducing agile was to avoid burnout of our engineers by restricting scope and workload to manageable levels, so that we consistently build and design better software. The team started off with full excitement and enthusiasm, but was completely exhausted by the end of the 2nd Sprint due to the intensity, and we realized this was not a workable model.
For us, it comes down to finding sustainable pace, always. To balance the pace, we adopt a strategy, which paves way for enhancing quality and also meeting the original intent of keeping our engineers happy.
As elaborated in our earlier approach of having the tester tagged to the development team, the key to addressing the regression rue is to have a separate pool of testers who pick up the documented unit-level test scenarios, identify the integration pieces and impact areas, and create a regression test suite. This team will always be a Sprint behind, but it adds value by conducting a thorough line-by-line vetting of test cases and reporting critical integration/impact issues without the pressure of the time factor.
The best aspect of this model is that, it provides flexibility and allows testers to identify gaps that are left uncovered (if any) by the other team. It also provides sufficient time to have maximum coverage of test cases and uncover all the defects within the Sprint duration. The defects logged by this team are resolved by a dedicated team of developers who then integrate these fixes along with user stories developed in the next Sprint and the QA regression continues to streamline the application across multiple iterations.
The two teams converge towards the end of the final Sprint and work alongside each other, ensuring the team meets the acceptance criteria defined in the plan and signs off in style, driving away safely and securely on the Agile Highway.
Has anyone else found better ways to sleeve off the peeves? Post it on our blog page.

Cloud Testing to Mitigate Risks


Tech leaders are continuously adopting cloud IT strategies to leverage profitable advantages that include speed, agility, scalability, accessibility, flexibility and innovation. In recent years, organizations have increasingly come to consider cloud computing the most suitable and promising choice.
According to Gartner, the worldwide public cloud services market is expected to reach $250 billion by 2017.
Another insight, discussed in “Digital Business”, says that over 60 per cent of enterprises were expected to have at least half of their infrastructure on cloud-based platforms by 2018.
Enterprises are becoming more receptive than ever, embracing cloud computing for their products, solutions, applications and infrastructure. Due to the exponential growth of digital business strategies, the IT industry is seeing strong cloud adoption, marking a shift from legacy IT services to cloud-based services.
IDC predicts cloud IT infrastructure spending will grow at a CAGR of 15.1% from 2014 to 2019, reaching $53.1 billion by 2019.
Cloud adoption necessitates cautious planning, execution and long-term management to yield the desired results. The process of cloud implementation begins with identifying the right cloud service application or solution provider. Then, data is transitioned from the existing servers/web servers to the cloud. Finally, a suitable automation tool is adopted to enable data migration, while giving importance to testing to ensure that the data/software migrated to the cloud works as planned. Businesses have to evaluate the cloud deployment models available – private cloud, public cloud and hybrid cloud (an integrated cloud combining both private and public clouds) – and choose the option best aligned with their business goals. Similarly, there are three cloud service models, namely SaaS, PaaS and IaaS, to be evaluated.
Gartner predicts that the highest growth will come from cloud system infrastructure services (infrastructure as a service [IaaS]), which is projected to grow 45 per cent in 2017.

Cloud testing – Need for it

57% of cloud applications fail due to security failures.
13% of cloud applications fail due to functional & performance failures.
Source – Gartner
From the essential characteristics of cloud computing, one can immediately identify a number of risks. The risks in cloud application development can be categorized as below:
  • Load/Performance
  • Security
  • Availability & Continuity
  • Functionality
  • Data Privacy
  • Compatibility
  • Business Logic
  • Maintainability
  • Interoperability
  • Regulation & Compliance
To determine the required test measures, all risks have to be mapped. First, conduct a product risk analysis to find out which areas are important to test. Then indicate which test measures can be taken to cover each respective risk.

Types of Cloud Testing

Functional Testing: Functional testing is performed for both remote and local applications. It involves testing all features and functions of the system/application. The different types of functional testing are system testing, integration testing and user acceptance testing.
Non-functional Testing: Non-functional testing ensures that the application meets the specified performance requirements. It includes security testing, stress testing, load testing, performance testing, browser testing, latency testing, availability testing, business requirement testing, etc.

Cloud Testing Tools

The right choice of testing tools depends on the client's application architecture, context and requirements. Some of the most commonly used cloud testing tools are given below.
Load Test and Performance Monitoring Tools
  • Perfecto Mobile, Keynote (Test Center Enterprise)
  • Monitis, Cloudsleuth
  • BrowserMob, CloudTools, GFI
  • LoadStorm, CloudHarmony, InterMapper, BlazeMeter
Web Functional/Regression Test Tools
  • Windmill, QEngine, Soasta CloudTest, Selenium, LoadStorm (Web & Mobile), etc.
Cloud Security Testing Tools
  • Nessus (Detect Vulnerabilities), Wireshark, Nmap, App Thwack (for testing Android, iOS, and web apps on actual devices), Xamarin Test Cloud, etc.
Finally, it is true that the future for all businesses will involve cloud computing on a very large scale. Thus, innovative cloud testing approaches and techniques are required to support on-demand testing services in a cloud infrastructure.
Indium’s expertise in testing applications hosted in cloud helps to drive higher RoI, and minimize risks. Indium’s robust Test Automation Frameworks help achieve faster time to market.
Are you working on cloud testing? Please share your experience. Or got a question? Feel free to comment.

Tuesday 11 April 2017

Day ‘0’ Planning for QA in DevOps



A Gartner survey suggests that 50% of organizations were using DevOps by the end of 2016. DevOps, the application of agile principles for faster development, ensures faster time-to-market and delivery. While breaking the wall separating development and operations, it also has more regulatory compliance requirements to deal with. In addition, by integrating QA processes throughout the development cycle, it enables timely detection and quicker correction of defects.
But to be able to do that in a meaningful and relevant manner, the QA strategy needs to be devised right from the word go – from the requirement stage to design, development, release and maintenance, taking care of every aspect of the development life cycle.
Therefore, Day ‘0’ planning for QA in DevOps becomes not just critical but mandatory even. It must be rooted in the question, ‘What could go wrong?’

Preparing for QA Integration

DevOps implementation needs commitment at the management level since it involves momentous decisions right from whether the implementation should be end to end or modular. There are ready-to-use DevOps solutions available from leading vendors for end-to-end development, but the cost can be formidable and may or may not meet the needs of customers. In such cases, the management may opt for customized solutions that meet their unique needs at more affordable costs.
Once the decision is taken, the development, operations and testing teams need to set the objectives of the project as well as capture the storyboard for the different features to enable all three teams to align their perspectives for the success of the project. This is captured in a blueprint that guides the entire life cycle, and also has a deep impact on the testing plan, which in turn will decide the project’s success and ROI. The progress will need to be monitored and reviewed by a neutral team drawn from all the three aspects of DevOps (development, operations and testing) throughout.

QA Inputs

The collaborative teams together devise the blueprint that acts as the beacon for development, operations and testing. For the blueprint design, the testing team provides the following inputs:
  • The processes it needs to follow
  • The tools needed for testing
  • Tools to test for integration
  • Provisioning for build validation scripts
  • Provisioning for automated testing
  • Identifying related processes that can be automated
This will require an understanding of what the development and operations teams require from a QA perspective, what the customer expectation of the product is and preparing for aspects that can be automated and that need manual testing.
It will require laying down QA objectives, defining the scope, designing the QA test strategy, creating the schedule, delineating the functions to be tested, allocating resources, and elucidating the deliverables while identifying the dependencies and the risks.

What Can Indium Do

At the heart of DevOps is not only reaching the product quickly to the customer, but also delivering a quality product that functions well and provides customer satisfaction.
Indium has experience working in the DevOps environment and puts user experience as the theme around which it plans and implements its testing process. It has streamlined its QA processes to be an integral partner in the DevOps process by:
  • Getting QA integrated with DevOps
  • Helping implement QA cycle in DevOps
  • Ensuring seamless integration of all modules for the smooth functioning of the entire product
Even after roll-out, the product attains maturity over multiple iterations, which is an ongoing process. Indium has codified its learning in its IP-driven test automation framework, iSafe. Successfully implemented in several DevOps environments, it has helped reduce defects by as much as 60 per cent while also reducing development times by nearly 20-40 per cent.

Benefits

Day 0 planning prepares the DevOps team to better monitor the progress of the project and ensure that the goals are met while taking care of the possible pitfalls, defects and bugs. The involvement of the testing team is especially critical in identifying the possible scenarios and providing the right inputs that help in efficient and effective development and implementation.
The underlying benefit is that with proper integration of testing in DevOps environment right from Day 0, the time to market is reduced. With support from automation framework such as iSafe from Indium, DevOps productivity improves dramatically and cost of quality is lowered. iSafe enables fail-fast, fail-safe and fail-smart through its instant, automated alert on detecting an error, with information relevant to help the development and/or operations team, as relevant.
Over the years, the framework also enables test process automation that can be customised for the DevOps project underway, including:
  • Starting/shutting down hub machines and servers
  • Setup: environment/data/dependencies
  • Registering and deregistering nodes
  • Checking out code from the repository and running the build process
  • Manual editing/updates of configuration files like testng.xml, property files, etc.
  • Defect analysis and categorization
Involvement of QA in Day 0 planning helps identify the manual and automated testing needs of the project and recommend ways to reduce the manual component where possible. This further helps shorten the development cycle and meet the DevOps objectives.