Java API: Backward Compatibility

What is backward compatibility?

– Deepthi Halbhavi –

Backward (or downward) compatibility in a Java API is the property that allows older usages of the API to keep functioning, without breaking their existing implementations, when the API is modified.

An API Interface

Let’s look at the simple API example below.

Example:

The API consists of only one interface, Foo, with one method, bar(), so a user of this API needs to implement the interface Foo.

The following class, FooImpl, implements Foo and provides the bar() method.
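The original code listings appear to have been images; a minimal reconstruction of the API described above might look like this (the exact method signatures are assumptions):

```java
// The API: a single interface with one method.
interface Foo {
    void bar();
}

// A user of the API implements the interface.
class FooImpl implements Foo {
    @Override
    public void bar() {
        System.out.println("bar");
    }
}
```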

A simple implementation for a simple API. But things get trickier when that simple API is modified to accommodate new requirements.

Let’s check that out.

Modifying an interface

Now, let’s modify the Foo interface, add a new method fooBar(), and see its implications on the implementation, FooImpl.
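The listing for the modified interface is missing here; a sketch (signatures assumed) could be:

```java
// The modified API: fooBar() is added to the interface.
interface Foo {
    void bar();
    void fooBar(); // new method -- existing implementations break
}

// The old implementation no longer compiles, because it does not
// provide fooBar():
//
// class FooImpl implements Foo {   // compile error: fooBar() missing
//     public void bar() { ... }
// }
```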

A new method is added to the Foo interface, and an implementation built against the old API now fails in two ways.

  1. The existing implementation will not compile, because it does not implement the new method.
  2. The existing implementation will not be able to use the new API and may not be able to use new features. Hence the implementation is stuck with the older version because of no backward compatibility.

Approaches to overcome the above issues

Implement new API

Just implement the new API.

As we saw above, the API has been modified, so FooImpl simply implements the newly added method fooBar().
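A sketch of this approach (signatures assumed, as above):

```java
interface Foo {
    void bar();
    void fooBar(); // newly added by the API
}

// Approach 1: the implementation simply adds the new method.
class FooImpl implements Foo {
    @Override
    public void bar() {
        System.out.println("bar");
    }

    @Override
    public void fooBar() {
        System.out.println("fooBar");
    }
}
```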

Pros:

  • Keep up-to-date with the API

Cons:

  • Forces an implementation to be added for the new API, which may not be required at the time of the upgrade
  • Not all the API users would want to implement the new API
  • It is not backward compatible

Interface versioning

In this approach, any new modifications to an interface are put in a new interface: create a new interface that extends the existing one and suffix its name with, say, _V1.

Let’s consider the previous example and apply this approach.

This is the original interface:

A versioned interface extends the previous one
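The listings are missing here; a sketch of the versioning approach (signatures assumed) might look like:

```java
// The original interface stays untouched.
interface Foo {
    void bar();
}

// All new methods go into a versioned interface that
// extends the previous version.
interface Foo_V1 extends Foo {
    void fooBar();
}
```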

Now the implementation has a choice of how and when to implement. For example:

  1. FooImpl implements the new interface, Foo_V1
  2. FooImpl implements Foo, and FooImpl_V1 extends FooImpl and implements Foo_V1
  3. Do not implement the new interface Foo_V1 at all if there is no requirement to implement it yet
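The second option above can be sketched like this (class and method names follow the article; signatures are assumptions):

```java
interface Foo {
    void bar();
}

interface Foo_V1 extends Foo {
    void fooBar();
}

// The existing implementation stays on the old interface, untouched...
class FooImpl implements Foo {
    @Override
    public void bar() {
        System.out.println("bar");
    }
}

// ...and a versioned implementation extends it to pick up
// the new interface.
class FooImpl_V1 extends FooImpl implements Foo_V1 {
    @Override
    public void fooBar() {
        System.out.println("fooBar");
    }
}
```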

Pros:

  • Both of the aforementioned issues are resolved
  • Implementations have a choice of which version to use
  • Implementations can choose whether to implement the new version at all
  • Fully backward compatible
  • The previous implementation can be extended for the new API version

Cons:

  • Increases the number of interfaces
  • If FooImpl_V1 is versioned and extends the previous implementation, it cannot extend any other class even if you wanted it to
  • May not keep up to date with the new API
  • You need to be aware of new API changes, because the compiler won’t complain, which can lead to missing API features

Using default method (Java 8 and above)

Java 8 introduced default methods in interfaces to provide backward compatibility. Default methods allow interfaces to carry an implementation without affecting the classes that implement them.

So, let’s use this in our example API to make it backward compatible.

The default void fooBar() method provides a default implementation, so it does not affect the FooImpl class that implements Foo.
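A sketch of the default-method approach (the bar() signature is an assumption):

```java
interface Foo {
    void bar();

    // Java 8+ default method: the interface itself carries a body,
    // so existing implementations keep compiling unchanged.
    default void fooBar() {
        System.out.println("default fooBar");
    }
}

// FooImpl is unchanged, still compiles, and inherits fooBar().
class FooImpl implements Foo {
    @Override
    public void bar() {
        System.out.println("bar");
    }
}
```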

Pros:

  • Both of the aforementioned issues are resolved
  • Fully backward compatible
  • Keeps up to date with the new API via the default implementation
  • Implementations can choose whether to override the new method at all
  • The implementation can extend another class

Cons:

  • You need to be aware of new API changes, because the compiler won’t complain, which can lead to missing API features

Three National Awards: What’s the Secret Sauce?

Over the last three weeks Adactin has been recognised in three very different forums for its performance. First, at the end of October the Australian Financial Review (AFR) placed the company in its ranking of Fast 100 companies, based on metrics it has monitored over the last three years. Then, in rapid succession in mid-November, we were recognised by our industry peers, being placed second in the CRN Fast 50 and 16th in the Deloitte Technology Fast 50. The CRN award was specifically focused on the ICT sector, while the Deloitte award is more catholic in its technology tastes. All three are national awards.

It’s tempting to think that there is a secret formula for achieving the sort of company growth which the AFR, CRN and Deloitte have recognised. There is a formula of sorts but it’s not a secret one. In fact you might find it a boring formula and will want to tell us you have heard it all before. Maybe so, but for Adactin the following tactics lie at the heart of our growth strategy. None make any difference on their own, but when woven together they make for a compelling engagement with our customers and that is what we are all about.

Above everything else we strive for a reputation for excellence in customer service and to comprehensively meet specific customer needs, even when those needs appear at the last minute.

It’s similar but critically different – in delivering excellent customer service we measure our success by the degree of customer satisfaction achieved. And that measure is not simply a customer saying ‘well done’ or ‘we love your work’. It’s measured by customer retention. Will a customer stay with Adactin and place more work with us when the original task is complete? Most do.

Customers are people too. They need to be heard, and they need to know we understand what we have heard. So we take extra care when we bring people onto the Adactin team. It’s not enough to be a good technologist. Our team members need to be careful listeners, empathetic partners and disciplined practitioners too. Care in the process of vetting our staff and contractors prior to assignment on projects pays customer satisfaction dividends. Oh, it also means we have a spectacularly low staff churn rate.

Having identified the right people we are able to focus on speed of execution, and aim to deliver a response in days rather than weeks or months. When we combine this with innovative delivery models we are well on the way to exceeding client expectations, given these have been shaped by large competitors who are most often weighed down by complex processes.

The speed of response is enabled and reinforced by our membership on key service and procurement panels, both in the federal government and state government. A potential customer can take some due diligence assurance from our presence on these panels, but most importantly the panels shorten the time it takes to have a solution directed at a problem. That all goes to the drive for excellence in customer service. Adactin is a member of Federal Government panels such as the ATO, DTA and Defence, and is on the NSW Government 0020 and 0007 Panels.

Finally we stay on the wave of emerging technologies like digital transformation, (big) data analytics and artificial intelligence. These technologies increasingly solve problems for customers in short periods of time and for dramatically less cost than traditional technologies. We diligently apply ourselves to areas where we can harness new technologies to achieve customer satisfaction.

So, as you can see, there is no secret sauce in what we do. And it may seem like it’s textbook stuff. But the growth numbers speak for themselves and we are very pleased the AFR, CRN and Deloitte have been listening. Each award applauds this strategy and all those in Adactin who have been so focused on making our customers successful.

Page Object Model using Selenium WebDriver


Author: Shiffy Jose

A test automation framework is a collection of re-usable methods that help automate the testing process. The three most common automation frameworks for Selenium WebDriver available in the market are the data-driven framework, the hybrid framework and the Page Object Model.

The Page Object Model (POM) is one of the most robust design patterns for automating test cases with Selenium WebDriver when compared to other frameworks. The POM framework reduces or even eliminates duplicate test code, supports code re-use, and lets us create tests with fewer keystrokes. In this design pattern, each web page in an application has a corresponding page class, which makes the automation code much more maintainable. The Page Object Model is implemented by separating the test objects, the test scripts and the externalised data.

The process can be divided into the following steps. The first step is separating data from the test code: data is kept in a separate file, such as an Excel sheet or a properties file, which can vary from project to project. Once the data separation is complete, test automation proceeds by creating the base page class, where all the initialisations are performed. This is followed by creating a page class corresponding to each web page. All the functions/methods relating to a page are written in its page class, which avoids duplication of code. Then it is time for test execution.
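The page-class idea can be sketched in plain Java. Here a StubDriver stands in for Selenium's WebDriver so the sketch is self-contained and runnable without a browser; the LoginPage class and its locator are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for Selenium's WebDriver so this sketch is self-contained;
// in a real project this would be org.openqa.selenium.WebDriver.
class StubDriver {
    private final Map<String, String> fields = new HashMap<>();
    void type(String locator, String value) { fields.put(locator, value); }
    String read(String locator) { return fields.get(locator); }
}

// Base page class: holds the driver and shared initialisation.
abstract class BasePage {
    protected final StubDriver driver;
    BasePage(StubDriver driver) { this.driver = driver; }
}

// One page class per web page; locators and page actions live here,
// so tests never touch raw locators and no code is duplicated.
class LoginPage extends BasePage {
    private static final String USERNAME_FIELD = "id=username";

    LoginPage(StubDriver driver) { super(driver); }

    void enterUsername(String name) { driver.type(USERNAME_FIELD, name); }

    String shownUsername() { return driver.read(USERNAME_FIELD); }
}
```

A test then calls LoginPage methods and asserts on the results, never on locators directly.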

Test execution is performed with the help of JUnit or TestNG. For each test case, methods from the dependent page classes are called, and an assert method is invoked to decide whether the test case passes or fails. This is the overall picture of the POM. For the overall test case execution, we can even use build tools like Ant or Maven. A test report for the entire test cycle can be generated using XSLT reports, which provide complete test results.

 

Behavior Driven Development

Serenity is an open source ATDD (Acceptance Test Driven Development) framework that helps you write better-structured, more maintainable automated acceptance and regression web tests faster, using Selenium 2 WebDriver. Serenity rethinks and extends the potential of ATDD: first by turning automated web tests into automated acceptance criteria, and then by documenting those criteria for collaborative, multidisciplinary teams. Serenity also uses the test results to produce illustrated, narrative reports that document and describe what your application does and how it works. Serenity tells you not only what tests have been executed but, more importantly, what requirements have been tested. One key advantage of using Serenity is that you do not have to invest time in building and maintaining your own automation framework. Serenity acceptance tests can be written in simple, plain English that is easily understandable by all the stakeholders in your project, and different stakeholders are able to view the acceptance tests at different levels.

We can use different Behaviour-Driven Development (BDD) tools or frameworks such as Cucumber or JBehave, or simply JUnit, with the Serenity library. We can also integrate Serenity with requirements stored in an external source such as JIRA or any other test case management tool, or just use a simple directory-based approach to organise your requirements.

Following are the components of Serenity BDD.

  • Requirement
  • Page Objects
  • Steps
  • Tests

After getting the requirements, we create user stories based on them. The Page Object Model is used for the web elements on the application’s pages. The next level of abstraction allows us to combine methods from the page objects into step methods. Finally, the methods from the steps level are used to develop the automated tests, and the test runs produce detailed reports. In Serenity, the test scenarios live in user story files, in @Given, @When, @Then format, in the src/test/resources/stories directory. The image below shows the architecture of Serenity:
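For illustration, a hypothetical JBehave story file (say, src/test/resources/stories/login.story; the narrative and steps here are invented for this sketch) might look like:

```
Narrative:
In order to access my account
As a registered user
I want to log in to the application

Scenario: Successful login
Given the user is on the login page
When the user enters valid credentials
Then the home page is displayed
```

Each Given/When/Then line is then bound to a step method annotated with @Given, @When or @Then.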

Reporting is one of Serenity’s fortes. Serenity not only reports on whether a test passes or fails, but documents what it did, in a step-by-step narrative format that includes test data and screenshots for web tests. The following test report was generated when I ran my sample project with JBehave and Serenity. Note how much better the Serenity report is compared to the standard JBehave report.

Conclusion

Serenity is a rich and advanced tool for automated, web-based acceptance and regression testing. Other BDD frameworks like JBehave combine with Serenity to make a great pairing. Serenity makes it easier to report on progress using automated scenarios, not just as a reporting tool, but also by tracking the relationship between the requirements that have been defined and the corresponding acceptance tests.

Hope you enjoyed reading. Have a great day!

 

Swati Karwa

Automation Test Analyst

Adactin Group

Performance Testing on AWS Cloudwatch

Amazon CloudWatch is a service for monitoring servers that run on Amazon Web Services (AWS). It gives users real-time monitoring of AWS resources such as Amazon EC2 instances, Amazon EBS (Elastic Block Store) volumes, ElastiCache and Amazon RDS database instances. AWS provides metrics for CPU utilization, latency and request counts, and allows users to customise the metrics to be monitored, such as memory usage, transaction volumes or error rates. CloudWatch also lets users view their application’s behaviour as graphs of these statistics.
The AWS Management Console provides access to Amazon CloudWatch. Within the console, we use the EC2 metrics. A metric is recorded for each instance; an instance has a unique instance ID, an instance name and a metric name. In this article we will focus on the CPU Utilization metric.
Different alarms can be created to monitor the metrics in your application. When creating an alarm, we first decide on the Amazon CloudWatch metric to monitor. The metric can be CPU utilization, throughput, queue length, and so on. Next, we choose the evaluation period (e.g. 1 minute, 5 minutes, 15 minutes, 1 hour or 1 day) and a statistical value to measure (e.g. Average, Min, Max or Sum). To set a threshold, set a target value and choose whether the alarm will trigger when the value is greater than (>), greater than or equal to (>=), less than (<), or less than or equal to (<=) that value.

The following graph shows CPU Utilization. The evaluation period is 5 minutes, and the alarm triggers when CPU Utilization > 50 throughout the evaluation period. The alarm remains in the alarm state until the metric no longer breaches the set threshold.
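An alarm like the one described can also be created from the command line with the AWS CLI. This is a sketch only; the alarm name and instance ID are placeholders, and real usage needs configured AWS credentials:

```shell
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 50 \
  --comparison-operator GreaterThanThreshold
```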


Pooja Dogra

Test Analyst

Adactin Group