Automation Anti-Patterns That You Must Avoid


Regardless of your testing experience or the effectiveness of your automation infrastructure, automation should start from a robust test design. Without strong test design, testing teams face a wide range of issues that produce incomplete, inefficient, and difficult-to-maintain tests. To make sure cost efficiency, quality, and delivery do not suffer, it is necessary to recognize the indicators of poorly performing tests. To begin with, avoid the following automation anti-patterns.

Long Sequences of Small Steps

Often, tests are created as long sequences of small steps, which makes them hard to manage and maintain. For instance, when the application under test changes, it is considerably complex to propagate those modifications across all the affected tests.

Instead of starting bottom-up, first generate a high-level design. Such a design can define test products with their scopes and definitions, breaking the main objective down into test objectives for the individual tests. For instance, one test product could consist of the test cases that cover the calculation of home loan mortgage premiums.
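As a minimal sketch of such a high-level design in code, consider a test module whose scope is restricted to premium calculation. The `mortgage.calculate_premium` function is hypothetical, and the expected figures assume the standard amortization formula.

```python
"""Test product: home loan mortgage premiums.

Scope: calculation of monthly premiums only; UI interaction and
order handling are covered by other test modules.
"""
import pytest

from mortgage import calculate_premium  # hypothetical application module


@pytest.mark.parametrize(
    "principal, annual_rate, years, expected",
    [
        (100_000, 0.05, 30, 536.82),
        (250_000, 0.04, 15, 1_849.22),
    ],
)
def test_monthly_premium(principal, annual_rate, years, expected):
    assert calculate_premium(principal, annual_rate, years) == pytest.approx(expected, abs=0.01)
```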

Business Tests

When testers focus too much on interaction tests, they may design weak tests that miss major business-level concerns, such as how the application responds in unusual circumstances.

Alongside interaction tests, testers should emphasize business tests, which represent business rules, objects, and processes. For instance, in a business test a user can log in, enter a few orders, and view the financial information through high-level actions that mask the details of the interaction.

In an interaction test, by contrast, a user types a name/password combination and checks whether or not the submit button is enabled. Such a test could occur in any type of business environment.
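The contrast can be sketched as follows, assuming hypothetical high-level helpers (`login`, `enter_order`, `view_financials`) that hide the UI details:

```python
# Business test: high-level actions, business-level checks only.
def test_order_shows_up_in_financials(app):
    app.login("jdoe", "s3cret")
    app.enter_order(product="P-100", quantity=2)
    figures = app.view_financials()
    assert figures.open_orders == 1  # business rule, not a UI detail


# Interaction test: exercises the UI mechanics themselves.
def test_submit_enabled_once_credentials_are_typed(login_page):
    login_page.type_name("jdoe")
    login_page.type_password("s3cret")
    assert login_page.submit_button.is_enabled()
```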

Blurred Lines

While it is important to run both interaction and business tests, keep in mind that they should be kept separate. For instance, the rules, lifecycles, processes, and calculations of business objects must not be mixed with interaction-test details like confirming that the submit button is present or that a message is displayed after login. Such a mix makes maintenance hard: if a new application version changes the welcome message, every test that happens to check it has to be found and updated.

A modular, well-scoped test design shows testers how to avoid these blurred lines and keeps tests maintainable and manageable. Test modules carry a well-defined scope: they leave out checks that do not belong to that scope and hide the detailed steps of the UI interactions.

Life Cycle Tests

Most business applications work with business objects like products, invoices, orders, and customers. The lifecycle of such an object consists of create, retrieve, update, and delete operations, known as CRUD. The major issue is that the tests for these lifecycles tend to be hard to find, incomplete, and scattered, so there can be real gaps in test coverage, particularly when the business objects keep changing. In a car rental application, for example, several cars and vans might be exercised while buses and motorcycles receive far less coverage.

Life cycle tests are easy to design. Start by choosing business objects along with their operations, including variations like updating or canceling an order. Keep in mind that life cycle tests are business tests, not interaction tests.
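A minimal sketch of such a test, assuming a hypothetical `orders` API client; one well-scoped test walks the whole CRUD lifecycle:

```python
def test_order_lifecycle(orders):
    # Create
    order_id = orders.create(customer="C-42", product="P-100", quantity=2)

    # Retrieve
    assert orders.get(order_id).status == "open"

    # Update (a variation: change the quantity before fulfilment)
    orders.update(order_id, quantity=3)
    assert orders.get(order_id).quantity == 3

    # Delete, or cancel, depending on the business rules
    orders.cancel(order_id)
    assert orders.get(order_id).status == "cancelled"
```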

Poorly Developed Tests

Factors like time pressure can culminate in shallow test cases that are not good enough to test the application properly. As a result, quality suffers: situations are missed and go untested. Shallow tests also make test maintenance more expensive.

More importantly, it is necessary to keep testing and test automation in sync within the same Scrum sprint. Tests and their automation require close cooperation, which becomes much harder if they are still running after the sprint has ended.

When improved automation architecture and test design are still not enough to keep up with the development velocity, consider outsourcing some tasks so the testers can match the speed.

Think like a professional tester: an individual with a knack for breaking things. Testing techniques like error guessing and decision tables can help pinpoint the situations that most urgently need test cases. On a similar note, equivalence partitioning and state-transition diagrams can suggest different designs for the test cases.
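As an illustration of equivalence partitioning, the sketch below assumes a hypothetical `validate_amount` function that accepts loan amounts between 10,000 and 1,000,000, and tests one representative value per partition plus the boundaries:

```python
import pytest

from mortgage import validate_amount  # hypothetical: accepts 10_000..1_000_000


@pytest.mark.parametrize(
    "amount, accepted",
    [
        (-5_000, False),     # invalid partition: negative amounts
        (9_999, False),      # just below the lower boundary
        (10_000, True),      # lower boundary
        (500_000, True),     # valid partition: mid-range
        (1_000_000, True),   # upper boundary
        (1_000_001, False),  # just above the upper boundary
    ],
)
def test_amount_equivalence_partitions(amount, accepted):
    assert validate_amount(amount) is accepted
```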

Scope

Lack of scope is quite a common problem in tests. Each test should have a clear scope, for example a single entry dialog or a group of financial transactions. With clear scopes, tests are easy to find and to update when the application changes, and duplication is avoided.

Duplicate Checks

Testers usually verify the expected output after every step, a practice that many test management tools encourage. What this means is that the same check can appear in several separate tests. For instance, the earlier example determined whether or not the welcome screen appears after login; every test that logs in may end up repeating that check.

Begin with a deliberate test design. Make sure all test modules have clear, well-differentiated scopes, and during test development avoid checking after each step. Add only the checks that belong to the module's scope.
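As a sketch of the fix, assuming a hypothetical `app` fixture: the shared login helper performs the steps but asserts nothing, and the welcome-message check lives in exactly one test whose scope it belongs to.

```python
def login(app, name="jdoe", password="s3cret"):
    # Shared helper: performs the steps but deliberately checks nothing,
    # so tests that merely need a logged-in user do not repeat the check.
    app.type_name(name)
    app.type_password(password)
    app.submit()

# The welcome message is verified in exactly one, well-scoped place.
def test_welcome_message_after_login(app):
    login(app)
    assert app.message() == "Welcome, jdoe!"

# Other tests reuse the helper without re-checking the welcome message.
def test_order_entry_after_login(app):
    login(app)
    app.enter_order(product="P-100", quantity=1)
    assert app.order_count() == 1
```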


AWS S3 Tips for Performance


Many companies use Amazon S3 for storage. As object storage, it offers flexibility across a slew of data types, from small objects to massive datasets. S3 has thus carved out its niche as a service that can store a broad range of data types in a resilient and highly available environment. Since your S3 objects have to be accessed and read by other AWS services, applications, and end users, are they optimized to offer the best possible performance? Follow these tips to improve your performance with Amazon S3.

Perform TCP Window Scaling

TCP window scaling lets developers improve network throughput performance by adding a window scale option to the TCP header, which allows more than the traditional 64 KB of data to be in flight on a connection. It is important to note that this practice is not exclusive to S3; it works at the protocol level, so you can enable window scaling on your client when establishing a connection with any server.

When TCP establishes a connection between a source and a destination, a 3-way handshake starts from the source. From the S3 point of view, the client may need to upload an object to S3, but before it can, it must create a connection to the S3 servers.

The client sends a TCP packet with a defined TCP window scale in the header; this request, known as a SYN request, is the first part of the 3-way handshake. When S3 receives the request, it responds to the client with a SYN/ACK message carrying the same window scale factor, which forms the second part of the handshake. The third and final part is an ACK message sent to the S3 server as acknowledgment of the response. Once this 3-way handshake ends, a connection is established and the client and S3 can finally exchange data.

To send more data at once, you can widen the window size via the scale factor. This lets voluminous amounts of data be in flight on the connection and speeds up transfers.
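As a client-side sketch, assuming a Linux host: enlarging the socket receive buffer before connecting lets the kernel negotiate a window scale factor large enough to represent it during the handshake.

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request a 4 MB receive buffer *before* connecting; the window scale
# option is negotiated in the SYN, so it must be set in advance.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
s.connect(("s3.amazonaws.com", 443))
print("effective receive buffer:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```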

Use Selective Acknowledgment

At times, TCP packets get lost, and within a TCP window it is hard to figure out which ones went missing. Consequently, all the packets in the window are sometimes resent, which is an ineffective strategy because the receiver may already have received some of them.

Instead, you can use TCP SACK (selective acknowledgment) to improve performance: the sender is notified of exactly which packets in a window failed to arrive and can then resend only those packets.

However, the sender (the source client) must initiate SACK during the SYN phase of the connection handshake; this option is known as SACK-permitted. SACK itself is specified in RFC 2018.
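On Linux, both window scaling and SACK are kernel tunables; here is a quick sketch to check whether they are enabled on a client host (paths assume the standard /proc layout):

```python
from pathlib import Path


def tcp_tunable(name: str) -> str:
    # Linux exposes TCP options under /proc/sys/net/ipv4/; "1" means enabled.
    return (Path("/proc/sys/net/ipv4") / name).read_text().strip()


print("window scaling:", tcp_tunable("tcp_window_scaling"))
print("selective ack:", tcp_tunable("tcp_sack"))
```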

Setting Up S3 Request Rates

Alongside TCP scaling and TCP SACK, S3 itself is well optimized for high request throughput. In 2018, AWS announced a change to these request rates. Before the announcement, it was recommended to randomize prefixes within a bucket to optimize performance; that is no longer necessary. Now request-rate performance grows with the number of prefixes used within the bucket.

Currently, developers get 3,500 PUT/POST/DELETE requests per second and 5,500 GET requests per second. A single prefix is the reason behind these limits; however, there is no limit on the number of prefixes that can be used in an S3 bucket. This means that if you use 2 prefixes, you can achieve 11,000 GET requests and 7,000 PUT/POST/DELETE requests per second for the same bucket.

S3 folders have no hierarchical structure; storage is flat. All you need is a bucket, and all objects are saved in the bucket's flat space. You can create folders and store objects in them, but without depending on a hierarchical file system: an object's prefix is part of what makes its key unique. For instance, suppose you have the following objects in a bucket:

  • Design/Meeting.ppt
  • Objective/Plan.pdf
  • Will.jpg

Here, the ‘Design’ folder serves as the prefix identifying the first object; the full pathname is referred to as the object key. The ‘Objective’ folder is likewise a prefix, while ‘Will.jpg’ has no prefix.
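A short boto3 sketch of this layout (the bucket name and local files are hypothetical): uploads spread keys across two prefixes, and each prefix then sustains its own request-rate limit.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # hypothetical bucket

# Each distinct prefix ("Design/", "Objective/") scales independently.
for local_file, key in [
    ("Meeting.ppt", "Design/Meeting.ppt"),
    ("Plan.pdf", "Objective/Plan.pdf"),
    ("Will.jpg", "Will.jpg"),  # no prefix
]:
    s3.upload_file(local_file, BUCKET, key)

# List only the objects under one prefix.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix="Design/")
for obj in response.get("Contents", []):
    print(obj["Key"])
```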

Amazon CloudFront

One more optimization strategy is to integrate Amazon CloudFront with Amazon S3. This is a wise strategy when most requests for S3 data are GET requests. CloudFront is a content delivery network that speeds up the distribution of static and dynamic content across a global network of edge locations.

Typically, when a user sends a GET request for S3 data, the S3 service routes it and the relevant servers return the content. If you use CloudFront in front of S3, it can cache commonly requested objects. The user's GET request is then directed to the nearest edge location, which returns the cached object with low latency and the best performance. This also decreases AWS costs, since the number of GET requests hitting the bucket is reduced.
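A minimal client-side sketch, assuming a hypothetical distribution domain fronting the bucket; repeat GETs for the same key are then served from the edge cache:

```python
import urllib.request

# Hypothetical CloudFront distribution with the S3 bucket as its origin.
CDN = "https://d1234abcd.cloudfront.net"

with urllib.request.urlopen(f"{CDN}/Design/Meeting.ppt") as resp:
    data = resp.read()
    # CloudFront reports cache status, e.g. "Hit from cloudfront".
    print(resp.headers.get("X-Cache"))
```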

 

Best Code Review Practices


How do you run code reviews? Code reviews are vital: they enhance the quality of the code and contribute to its stability and reliability. Moreover, they build and foster relationships among team members. Here are some tips for better code reviews.

1.   Have a Basic Understanding of What to Look For in the Code

To begin with, you should have a basic idea of what you are looking for. Ideally, you should examine major aspects like the following.

  • What structure has the programmer followed?
  • Is the logic sound?
  • What style has been used?
  • How does the code perform?
  • How are the test results?
  • How readable is the code?
  • Does it look maintainable?
  • Does it tick all the boxes for functionality?

You can also perform static analysis or other automated checks to evaluate the logic and structure of your code. However, some things, like functionality and design, are best reviewed manually.
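As a sketch of the automated portion (the tool names are examples; substitute whatever your project already uses), a small gate script can run the mechanical checks before the human review starts:

```python
import subprocess
import sys

# Run the automated checks first so the human review can focus on
# design, functionality, and readability.
CHECKS = [
    ["flake8", "src/"],     # style and simple static analysis
    ["pytest", "--quiet"],  # test results
]

for cmd in CHECKS:
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"automated check failed: {' '.join(cmd)}")
print("all automated checks passed; ready for human review")
```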

Moreover, you should also consider the following questions about the code.

  • Are you able to understand how the code works and what it does?
  • Does the code follow the client requirements?
  • Are all modules and functions running as expected?

2.   Take 60-90 Minutes for a Review

Avoid reviewing a codebase for too long in a single sitting. After about 60 minutes, a reviewer naturally begins to tire and loses the mental and physical sharpness needed to pick out defects. Studies of attention-demanding work support this: performance begins to dip after roughly 60 minutes. Within such a session, a reviewer can at best cover around 300 to 600 lines of code.

3.   Review at Most 400 Lines of Code at a Time

According to a code review study at Cisco, developers get the best results by reviewing 200 to 400 LOC (lines of code) at a time. Beyond that, the capability to identify bugs begins to wither away. Given that an average review stretches up to 1.5 hours, this yields a defect discovery rate of 70-90%. In other words, if there were 10 faults in the code, you would find 7 to 9 of them.

4.   Make Sure Authors Annotate Source Code

Authors can remove almost all of the flaws from their code before a review is even required. This can be achieved by making it mandatory for developers to re-check their own code, which makes reviews finish faster without harming code quality.

Before the review, authors can annotate their code. With annotations, the reviewer can go through all the modifications, see what to look at first, and assess the method and reason for every change. These notes are not merely code comments; they serve as a guide for the reviewers.

Because authors have to re-assess and explain their modifications while annotating the code, annotation tends to surface flaws before the review even begins. As a result, the review achieves a greater level of efficiency.

5.   Set Up Quantifiable Metrics

At the outset, decide on the goals of the code review and how to measure its effectiveness. With defined goals, you can judge whether peer review is actually delivering the required results.

You can use external metrics like "reduce the defects coming out of development by 50%" or "decrease support calls by 15%", which give a picture of how well your code is doing from an external point of view. A quantifiable measurement is also wiser than a vague objective such as "resolve more bugs".

Note that results from external metrics take time to materialize; for instance, support calls will not change until the new version is released and users are running it. Therefore, you should also track internal process metrics: count the defects found, study the points that cause issues, and see how much time your developers spend reviewing code. Some internal code review metrics are the following (a small calculation sketch follows the list).

  • Inspection rate: measured in kLOC (thousands of lines of code) per work hour; it represents how quickly the code is reviewed.
  • Defect rate: measured in defects discovered per hour; it represents how quickly defects are found.
  • Defect density: measured in defects per kLOC; it represents how many defects are found in a given amount of code.
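A minimal sketch of the arithmetic behind these three metrics (the sample numbers are hypothetical):

```python
def review_metrics(loc: int, hours: float, defects: int) -> dict:
    kloc = loc / 1000  # thousands of lines of code
    return {
        "inspection_rate_kloc_per_hour": kloc / hours,
        "defect_rate_per_hour": defects / hours,
        "defect_density_per_kloc": defects / kloc,
    }

# Example: 400 LOC reviewed in 1.5 hours with 6 defects found:
# inspection rate ~0.27 kLOC/hour, defect rate 4.0/hour, density 15.0/kLOC.
print(review_metrics(loc=400, hours=1.5, defects=6))
```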

6.   Create Checklists

Checklists are vital for reviews because they help the reviewer keep tasks in mind. They are a good way to cover components you might otherwise forget, and they help not only reviewers but developers as well.

One of the toughest defect types to catch is omission, since it is obviously hard to notice a piece of code that was never written. A checklist is one of the best ways to address this issue. With a checklist, both the reviewer and the author can verify that errors have been handled, that function arguments are tested with invalid values, and that the required unit tests have been created.

Another good idea is a personal checklist. Each developer tends to repeat the same errors in their code. Enforcing the creation of personal checklists helps the reviewers catch them.