Page Object Model using Selenium WebDriver
Author: Shiffy Jose
A Test Automation Framework is a collection of reusable methods that help automate the test process. The three most common automation frameworks for Selenium WebDriver available in the market are the data-driven framework, the hybrid framework, and the Page Object Model.
Page Object Model (POM) is one of the most robust design patterns for automating test cases with Selenium WebDriver. The POM framework reduces or even eliminates duplicate test code, supports code reuse, and lets us create tests with fewer keystrokes. In this design pattern, each web page in an application has a corresponding page class, which makes the automation code much easier to maintain. An implementation of the Page Object Model is achieved by separating the test objects from the test scripts and externalizing the data.
The process of the framework can be divided into the following steps. The first step is separating the data from the test code. The data should be kept in a separate file, such as an Excel sheet or a properties file; the exact format can vary from project to project. Once the data separation is complete, test automation can proceed with creating the base page class, where all the initializations are performed. This is followed by creating a page class corresponding to each web page. All the functions/methods relating to a page are written in its corresponding page class, which avoids duplication of code. Now it is time for test execution.
Test execution is performed with the help of JUnit or TestNG. For each test case, methods from the dependent page classes are called, and an assert method is invoked to decide whether the test case passes or fails. This is the overall picture of POM. For running the whole test suite, we could even use build tools like Ant or Maven. A test report for the entire test cycle can be generated using XSLT reports, which provide complete test results.
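The structure described above can be sketched in plain Java. This is a minimal, dependency-free illustration of the pattern: the `StubDriver` class stands in for Selenium's `WebDriver` so the example is self-contained, and the `LoginPage` class, its locators, and its method names are hypothetical, chosen only to show the shape of a base page, a page class, and a test with an assert step.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for Selenium's WebDriver so the sketch runs on its own.
// In a real project this would be org.openqa.selenium.WebDriver.
class StubDriver {
    private final Map<String, String> fields = new HashMap<>();
    void type(String locator, String text) { fields.put(locator, text); }
    String read(String locator) { return fields.getOrDefault(locator, ""); }
}

// Base page class: holds the driver and any shared initialization.
abstract class BasePage {
    protected final StubDriver driver;
    protected BasePage(StubDriver driver) { this.driver = driver; }
}

// One page class per web page; all methods for that page live here,
// which is what avoids duplicated code across tests.
class LoginPage extends BasePage {
    LoginPage(StubDriver driver) { super(driver); }

    LoginPage enterUsername(String name) { driver.type("username", name); return this; }
    LoginPage enterPassword(String pw)   { driver.type("password", pw);   return this; }
    String displayedUsername()           { return driver.read("username"); }
}

public class LoginTest {
    public static void main(String[] args) {
        LoginPage login = new LoginPage(new StubDriver());
        login.enterUsername("adactin").enterPassword("secret");
        // The assert step decides pass/fail, as JUnit/TestNG would.
        if (!login.displayedUsername().equals("adactin")) {
            throw new AssertionError("username was not entered");
        }
        System.out.println("test passed");
    }
}
```

In a real suite the test class would be a JUnit or TestNG test, and the page classes would receive a genuine `WebDriver`; the separation of concerns stays exactly the same.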
Behavior Driven Development
Serenity is an open source ATDD (Acceptance Test Driven Development) framework that helps you write better structured, more maintainable automated acceptance and regression tests for web applications, faster, using Selenium 2 WebDriver. Serenity rethinks and extends the potential of ATDD: first by turning automated web tests into automated acceptance criteria, and then by documenting those criteria for collaborative, multidisciplinary teams. Serenity also uses the test results to produce illustrated, narrative reports that document and describe what your application does and how it works. It tells you not only what tests have been executed but, more importantly, what requirements have been tested. One key advantage of using Serenity is that you do not have to invest time in building and maintaining your own automation framework. Serenity acceptance tests can be written in simple, plain English that is easily understandable by all the stakeholders in your project, and different stakeholders are able to view the acceptance tests at different levels.
We can use different Behavior-Driven Development (BDD) tools or frameworks, such as Cucumber or JBehave, or simply JUnit, with the Serenity library. We can also integrate Serenity with requirements stored in an external source such as JIRA or any other test case management tool, or just use a simple directory-based approach to organize your requirements.
Following are the components of Serenity BDD.
- Requirements
- Page Objects
- Steps
- Tests
After gathering the requirements, we create user stories based on them. The Page Object Model is used for the web elements on the application's pages. The next level of abstraction allows us to combine methods from the page objects into step methods. Finally, the methods from the steps level are used to develop the automated tests, and each test run produces detailed reports. In Serenity, the test scenarios live in user story files, written in Given/When/Then format, in the src/test/resources/stories directory. The image below illustrates the architecture of Serenity:
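For illustration, a story file in that directory might look like the following. The file name, scenario, and steps here are invented for this example; each Given/When/Then line would be bound to a step method in a Java steps class.

```
Scenario: Registered user can log in

Given the user is on the login page
When the user logs in with valid credentials
Then the home page displays the user's name
```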
Reporting is one of Serenity's fortes. Serenity not only reports on whether a test passes or fails, but documents what it did, in a step-by-step narrative format that includes test data and screenshots for web tests. Following is the test report generated when I ran my sample project with JBehave and Serenity. Note how much richer the Serenity report is compared to the standard JBehave report.
Conclusion
Serenity is a rich, advanced tool for automated acceptance and regression testing of web applications. BDD frameworks like JBehave and Serenity make a great combination. Serenity makes it easier to report on progress using automated scenarios, not just as a reporting tool, but also by tracking the relationship between the requirements that have been defined and the corresponding acceptance tests.
Hope you enjoyed reading. Have a great day!
Swati Karwa
Automation Test Analyst
Adactin Group
Adactin to diversify service offerings
Software testing services company Adactin will soon be diversifying its offerings. Instead of just offering testing services, the company will be adding systems integration (SI) services to its portfolio.
Adactin CEO, Navneesh Garg, who started the Australian Deloitte Fast 500 company, told ARN that the new offering is based on market demand.
“Our customers want us to grow in other areas as well. We have done some work in that space but we want to amplify it further. 10 per cent of our work this year has been in the integration services space so we want to take it as a different business unit.
“We want to set it up as a separate organisation so we can offer a more end-to-end solution to customers moving forward,” he said.
Adactin is a reseller for HP and IBM. It also has Neotys, Acunetix, and RadView under its portfolio. The company also does local work with NEC, Quay Consulting, and RXP Services. Adactin also has a partnership with Ingram Micro in Australia, distributing HP products which Adactin then takes to market.
According to Garg, the company also intends to increase its play in the Australian channel. It currently has a big play in the Federal Government and NSW Government space.
“Besides the government, we do a lot of work for the energy sector, financial companies, and IT vendors. Having said that, our solutions are vertical agnostic so our focus is on ERP implementation and CRM. Our core focus is services.”
With most of its customers being mid-segment companies or organisations, Garg said getting more enterprise segment customers is on its to-do list.
“The enterprise segment space is always competitive. But we see a gap in the market and we’re trying to bridge the gap between mid-segment and enterprise.
“We’re also seeing an increase in demand for local resources. Customers that used to outsource or offshore solutions are now seeking on-site or on-shore resources. They want more local representation. A lot of the changes we’re making are driven by this trend,” he added.
Adactin has offices in Parramatta, Melbourne, Canberra, and Auckland. The company currently employs 60 people across its four operations.
“Sydney has been our core market for the past few years but we’re now looking at growing our Canberra, Melbourne, and Auckland markets. The Sydney market still has a lot of scope for us but we want to extend business to these regions,” Garg said.
Garg started Adactin in Australia more than five years ago. Previously, he was working in India for Adobe Systems, HCL, and CresTech Software Systems.
Performance Testing on AWS CloudWatch
Amazon CloudWatch is a service for monitoring servers that run on Amazon Web Services (AWS). It gives users real-time monitoring of AWS resources such as Amazon EC2 instances, Amazon EBS (Elastic Block Store) volumes, ElastiCache and Amazon RDS database instances. AWS provides metrics for CPU utilization, latency and request counts out of the box, and also allows users to define custom metrics to monitor, such as memory usage, transaction volumes or error rates. CloudWatch lets users view their application's behaviour as graphs of these statistics.
The AWS Management Console provides access to Amazon CloudWatch. Within the console, we use the EC2 metrics. Metrics are recorded per instance; each instance has a unique instance ID, an instance name and a set of metric names. In this article we will focus on the CPU Utilization metric.
Different alarms can be created to monitor the metrics in your application. When creating an alarm, we first decide on the Amazon CloudWatch metric to monitor, such as CPU utilization, throughput or queue length. Next, we choose the evaluation period (e.g., one minute, five minutes, 15 minutes, one hour or one day) and a statistic to measure (e.g., Average, Minimum, Maximum or Sum). To set a threshold, set a target value and choose whether the alarm triggers when the metric is greater than (>), greater than or equal to (>=), less than (<), or less than or equal to (<=) that value.
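The decision an alarm makes can be sketched in a few lines of Java. This is an illustrative simplification of CloudWatch's behaviour, not AWS code: it assumes a single evaluation period, the Average statistic, and a made-up set of CPU samples.

```java
import java.util.Arrays;

public class AlarmSketch {
    // Returns true when the Average statistic over the evaluation
    // period breaches the threshold for the chosen comparison
    // operator, e.g. ">" with a threshold of 50.
    static boolean breaches(double[] samples, String comparison, double threshold) {
        double avg = Arrays.stream(samples).average().orElse(0.0);
        switch (comparison) {
            case ">":  return avg >  threshold;
            case ">=": return avg >= threshold;
            case "<":  return avg <  threshold;
            case "<=": return avg <= threshold;
            default:   throw new IllegalArgumentException(comparison);
        }
    }

    public static void main(String[] args) {
        // Hypothetical CPU utilization samples from one 5-minute period.
        double[] cpu = {62.0, 71.5, 58.3};
        // Average is about 63.9, which is > 50, so the alarm fires.
        System.out.println(breaches(cpu, ">", 50.0) ? "ALARM" : "OK");
    }
}
```

The real service adds refinements (multiple evaluation periods, missing-data handling, an INSUFFICIENT_DATA state), but the threshold comparison at the core is the one shown here.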
Following is the graph for CPU Utilization. The evaluation period is five minutes, and the trigger is CPU Utilization > 50 throughout the evaluation period. The alarm remains in the ALARM state until the metric no longer breaches the set threshold.
Pooja Dogra
Test Analyst
Adactin Group