Saturday, 19 October 2013

Selenium WebDriver

WebDriver is a web automation framework which allows us to execute tests against different browsers. We can use any of several programming languages (Java, .Net, PHP, Python, Perl, Ruby) to write test scripts.

We can also use conditional operations like if-then-else or switch-case in our test scripts.

Let's look at a few similarities between WebDriver and Selenium RC:
Both allow us to use a programming language in designing tests.
Both allow us to run tests against different browsers.

Selenium RC's architecture is a bit more complicated:
We need to launch an application called the Selenium Remote Control Server before we start testing.
Selenium RC relies on JavaScript for every command. That means everything you write is eventually translated into JavaScript and run in the browser.
The browser obeys the instructions of Selenium Core and relays its response to the RC Server.
The RC Server receives the browser's response and then displays the results.
The RC Server then fetches the next instruction from your test script, and the whole cycle repeats.

WebDriver, however, interacts directly with the browser and uses the browser's own engine to control it; that is what makes WebDriver faster.

Remember that WebDriver operates on the OS level. Also remember that different browsers communicate with the OS in different ways. If a new browser comes out, it may have a different process of communicating with the OS as compared to other browsers. So, you have to give the WebDriver team quite some time to figure that new process out before they can implement it on the next WebDriver release.
However, it is up to the WebDriver’s team of developers to decide if they should support the new browser or not.

Note that while Selenium RC has been officially deprecated, WebDriver is still being developed rapidly; it suffers from a few growing pains and is not yet at full strength. That said, using WebDriver you can do anything Selenium RC can do, and sometimes more, with an occasional minor bug.

I recommend the following link for anyone who is trying to learn WebDriver:

Monday, 8 April 2013

Test Automation with VSTS

UITest Framework Architecture: How Test Automation Works in VSTS 2010
We have been using VSTS Coded UI Tests for over 3 years now for functional regression testing and have been intrigued by this complex yet simple tool. Yes, you read it right: complex yet simple, because it must have a very complex architecture to support so much in a single tool, yet for the user it is very simple and a breeze to work with. The mechanics behind any test automation tool are intricate yet very interesting; the basics remain the same across tools, with differences in the architectural details. Let's take a plunge into the architectural details of the UITest framework that the testing components of Visual Studio use and understand how an automation tool works. Let's have a glance at the architecture of the CUIT framework:


Let us go through the various blocks one by one and try to understand their significance starting from the plug-ins.
1.       Plug-ins / Technology Adapters: A plug-in or technology adapter is a module that understands the corresponding user-interface technology and provides UI-technology-specific services to the rest of the modules. The role of a technology adapter is to understand the technology for which it is designed and provide services to the rest of the layers, especially the abstraction layer. For example, to record/playback user actions on IE we have the web plug-in that understands the technology on which IE is based (i.e. MSHTML/DOM). It can thus communicate with both IE and the automation tool, providing a communication medium between the two and thereby enabling the record and playback services.
2.       Abstraction Layer: Next up is the abstraction layer, which abstracts the rest of the code from the various technologies. The abstraction layer has a very important role to play in supporting multiple technologies. This layer sits between the plug-ins and the rest of the modules. The record and playback engine speaks to the abstraction layer, which makes the engine independent of the technology being automated. The abstraction layer translates everything coming from the plug-ins, feeds the test engine with input it can understand, and sends instructions back to the plug-in for playback.
3.       Recorder and Playback Modules
Recorder: The recorder first records the raw steps (user actions); based on the filter/aggregation rules, these raw steps are converted into filtered steps, or user intention.
-          Filter rules are the rules based on which the recorder can filter out any unwanted/unintended actions, like back-spaces pressed while typing into an edit-box.
-          Aggregation rules are used to club multiple user actions into a single step wherever applicable. E.g., going to the Start menu, launching IE and typing a URL in the address bar can be aggregated into a single step, as it can be performed as one step while playing back the recording. This is also called intent-based recording.
Playback:  The playback module has a rich set of public APIs that users can use to write robust tests. The APIs can be used to interact with the AUT in many ways, like clicking a button or a hyperlink or selecting an item from a drop-down list. It also has a property provider, which gives information on the properties supported by each control in the AUT, and browser services for browser-specific operations like navigating to a URL, clearing the cache, etc.
4.       The two clients that are available as of today sit on the top layer.
-          Test Runner: The Test Runner uses the UITest framework to do Fast Forwarding for manual tests. The Test Runner interprets (using the interpreter module which actually forms a part of the Test Runner) the recording on the fly and calls appropriate API on the playback side to perform the user actions on the UI of AUT.
-          Coded UI Test (CUIT): The Coded UI Test, which is effectively a Visual Studio client, generates code out of the recording performed by the Recorder module. It uses the information provided by the property provider for each control to create definitions for the controls (in the AUT) and adds appropriate API calls to replicate the user actions performed during the recording session. These properties are used to identify the controls in the AUT during the playback session. Users can alternatively hand-code the entire CUIT using the rich set of public APIs.
To summarize, we can generalize the components discussed above to understand how automation tools work in general (of course, the implementation/architectural details differ from tool to tool). Happy Automation!

Friday, 15 March 2013

Testing using PERL

PERL: Practical Extraction and Report Language.

There are many things you can do with Perl while testing your application.

Perl is a great language to play with (my personal choice); trust me, you will fall in love with your work again once you start working with Perl.

In this section, testers can find out how effectively and efficiently they can use Perl for testing their applications.

Please note, I am a new blogger and my only purpose here is to make testing more interesting, and everything I am putting here is based on things I have worked on.

I will keep updating this blog with more things on Perl.

If you are testing a client-server application, you might be interacting with server logs to check what request the client is sending and what response the server is sending back to the client.

Sometimes logs are too big to go through in full when checking for some particular message.

You can obviously use an editor's search option, but that will be a tedious job.

Here, I will first show you how to find a particular pattern/text in a log file.

#Perl script to find a text/pattern in a log file
use strict;
use warnings;

#Here "logs" is the name of the file you want to open.
open(my $fh, '<', 'logs') or die "Unable to open logfile: $!\n";
while (my $line = <$fh>) {
    #Print every line containing the text "MSG" (case-insensitive).
    print $line if $line =~ /\bMSG\b/i;
}
close($fh);

Copy-paste the above code into a text file and name it as you choose (I am giving the file the extension ".pl").
The output will show all lines containing the text "MSG".

Now, if you want to modify it so that the output also shows how many times that text is present, you can do it like this:

#Perl script to search for a text/pattern and print the number of occurrences
use strict;
use warnings;

my $file = "filename.trace";
my $count = 0;
open(my $fh, '<', $file) or die "Unable to open $file: $!\n";
while (my $line = <$fh>) {
    #Count every occurrence of "Error", even several on one line.
    $count++ while $line =~ /\bError\b/ig;
}
close($fh);
print "Total instances present: $count\n\n";

Output will be:
Total instances present: 489

Let's see how we can test a webpage using a Perl script. Written below is a script that will open a webpage and fill in your details.

#Perl script to open a webpage and fill in the login fields
use Win32::IEAutomation;

# Creating a new instance of Internet Explorer
my $ie = Win32::IEAutomation->new( visible => 1, maximize => 1);
# Site navigation (replace with the URL of your login page)
$ie->gotoURL('https://example.com/login');
# Filling in the username and password fields
$ie->getTextBox('name:', "Email")->SetValue("give_your_username_here");
$ie->getTextBox('name:', "Passwd")->SetValue("give_your_password_here");
# Finding the sign-in button and clicking it
$ie->getButton('name:', "signIn")->Click;

Sometimes during testing you need to get information from the server, and it is tedious to open PuTTY and telnet to the server every time you want to execute a command. The script below will help in executing a command on the server and getting the response.

#Perl script to telnet to a server and execute a command
use Net::Telnet;

my $telnet = new Net::Telnet ( Timeout => 10, Errmode => 'die',
                               Prompt  => '/\$ $/i');
# Connect to the server first (replace with your server's host name)
$telnet->open('hostname');
$telnet->login('username', 'password');
print $telnet->cmd('who');


Let's see how to compare two huge files and find out:
1) what is common between the two files,
2) what is unique to each file.

#Perl script to find lines common to two files and lines present in only one of them
use strict;
use warnings;

my %first;
my @both   = ();
my @second = ();

open(my $fh1, '<', 'firstfilename.dat') or die $!;
while (<$fh1>) {
    #Each line of the first file becomes a hash key.
    $first{$_} = undef;
}
close($fh1);

open(my $fh2, '<', 'secondfilename.dat') or die $!;
while (<$fh2>) {
    if (exists $first{$_}) {
        #Line is present in both files.
        push(@both, $_);
        delete $first{$_};
    } else {
        #Line is present only in the second file.
        push(@second, $_);
    }
}
close($fh2);

print "In both files:\n";
print @both;
print "Only in second file:\n";
print @second;
print "Only in first file:\n";
print keys %first;


Please feel free to post your comments if you have any doubts.

Happy testing :)

Keep watching for more stuff on testing with Perl.

How to schedule tasks using crontab - UNIX

Crontab: cron is a process which executes commands at specific dates and times; crontab is the command used to maintain the table of scheduled tasks. You can use it to schedule activities, either as one-time events or as recurring tasks.

Each user can edit their own crontab; you need to log in as the superuser to add or remove entries for other users.

Execute

crontab -e

This will open a file for you; add your task to it and save it. At the scheduled time the task will be performed.

Suppose you want some shell script to be executed every minute; make an entry like this:

* * * * * /user/tools/

Various options that can be used with crontab:
crontab [-u user] [-l | -r | -e] [-i]
-u user   specifies the user's crontab to be manipulated. This is usually used by root to manipulate the crontab of other users, or can be used by you to correctly identify the crontab to be manipulated if you have used the su command to assume another identity.
-l        list the current crontab file
-r        remove the current crontab file
-e        edit the current crontab file using the text editor specified by the EDITOR environment variable or the VISUAL environment variable
-i        modifies the -r option to prompt the user for a 'y/Y' response before actually removing the crontab

Below you can see what kind of values and ranges the "*" at each position can take; using them you can customize the execution of your task.

*    *    *    *    *  command to be executed
┬    ┬    ┬    ┬    ┬
│    │    │    │    │
│    │    │    │    │
│    │    │    │    └───── day of week (0 - 7) (0 or 7 are Sunday, or use names)
│    │    │    └────────── month (1 - 12)
│    │    └─────────────── day of month (1 - 31)
│    └──────────────────── hour (0 - 23)
└───────────────────────── min (0 - 59)
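To make the field positions concrete, here are a few illustrative entries (the script paths are hypothetical placeholders):

```
30 2 * * *      /home/user/scripts/backup.sh       # 02:30 every day
0 0 1 * *       /home/user/scripts/cleanup.sh      # midnight on the 1st of every month
*/15 * * * 1-5  /home/user/scripts/healthcheck.sh  # every 15 minutes, Monday to Friday
```

The */15 form is a step value: the task fires at every minute divisible by 15.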

Monday, 3 December 2012

Frequently used Unix commands

ls --- lists your files 
  • ls -l --- lists your files in 'long format', which contains lots of useful information, e.g. the exact size of the file, who owns the file and who has the right to look at it, and when it was last modified. 
  • ls -a --- lists all files, including the ones whose filenames begin in a dot, which you do not always want to see. 
more filename --- shows the first part of a file, just as much as will fit on one screen. Just hit the space bar to see more or q to quit. You can use /pattern to search for a pattern.

diff filename1 filename2 --- compares the two files and shows the lines in which they differ.

chmod options filename --- changes the permissions of a file, i.e. who is allowed to read, write, or execute it.
gzip filename --- compresses files, so that they take up much less space. Usually text files compress to about half their original size, but it depends very much on the size of the file and the nature of the contents. There are other tools for this purpose, too (e.g. compress), but gzip usually gives the highest compression rate. gzip produces files with the ending '.gz' appended to the original filename.

gunzip filename --- uncompresses files compressed by gzip.
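A quick sketch of the round trip (the file name is just an example):

```shell
printf 'some log text\n' > report.txt   # create a sample file
gzip report.txt                         # replaces it with report.txt.gz
gunzip report.txt.gz                    # restores the original report.txt
```

Note that gzip removes the original file and leaves only the .gz version.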

mkdir dirname --- make a new directory

cd dirname --- change directory. You basically 'go' to another directory, and you will see the files in that directory when you do 'ls'. You always start out in your 'home directory', and you can get back there by typing 'cd' without arguments. 'cd ..' will get you one level up from your current position. You don't have to walk along step by step - you can make big leaps or avoid walking around by specifying path names.

pwd --- tells you where you currently are.

grep string filename(s) --- looks for the string in the files. This can be useful for a lot of purposes, e.g. finding the right file among many, figuring out which is the right version of something, and even doing serious corpus work. grep comes in several varieties (grep, egrep, and fgrep) and has a lot of very flexible options. Check out the man pages if this sounds good to you.
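For instance, a couple of common invocations (the file app.log and its contents are made up):

```shell
printf 'Error: disk full\nall ok\nERROR: timeout\n' > app.log  # sample log

grep -in "error" app.log   # -i ignores case, -n shows line numbers
grep -ic "error" app.log   # -c counts matching lines; prints 2 here
```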

who --- tells you who's logged on, and where they're coming from.
touch filename --- creates an empty file, or updates the timestamp of an existing one.

whoami --- returns your username. Sounds useless, but isn't. You may need to find out who it is who forgot to log out somewhere, and make sure *you* have logged out.

kill PID --- kills (ends) the processes with the ID you gave. This works only for your own processes, of course. Get the ID by using ps. If the process doesn't 'die' properly, use the option -9. But attempt without that option first, because it doesn't give the process a chance to finish possibly important business before dying. You may need to kill processes for example if your modem connection was interrupted and you didn't get logged out properly, which sometimes happens.
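A small self-contained sketch of the kill workflow, using a throwaway background process instead of a real stuck job:

```shell
sleep 30 &          # start a background process to practice on
pid=$!              # $! holds the PID of the most recent background job
kill "$pid"         # send the default SIGTERM first
# kill -9 "$pid"    # only if the process refuses to die
```

In real use you would find the PID with ps rather than $!.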

telnet hostname --- also lets you connect to a remote host. Use rlogin whenever possible.

man commandname --- shows you the manual page for the command

nohup --- if you want a background process to continue running even after you log out, you have to use the 'nohup' command to submit that background command.

Use the following format:
   nohup command &

Notice that you place the nohup command before the command you intend to run as a background process.

For example, suppose you want the grep command to search all the files in your current directory for the string word and redirect the output to a file called word.list, and you want to log out immediately afterward. Type the command line as follows:
   nohup grep word * > word.list &

You can terminate the nohup command by using the kill command.

top --- shows how much processing power and memory are being used, as well as other information about the running processes.

whereis --- locates the binary, source, and manual page files for a command.

cut --- this command in Unix (or Linux) is used to select sections of text from each line of a file. You can use the cut command to select fields or columns from a line by specifying a delimiter, or you can select a portion of text by specifying a range of characters. Basically, the cut command slices a line and extracts the text.
You can use the cut command, just like the awk command, to extract fields in a file using a delimiter. The -d option in the cut command specifies the delimiter and the -f option specifies the field position.

cut -d' ' -f2 file.txt
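To make the options concrete, a short example (users.txt and its contents are made up):

```shell
printf 'alice 1001 bash\nbob 1002 zsh\n' > users.txt  # sample space-separated data

cut -d' ' -f2 users.txt   # field 2 by delimiter: prints 1001 then 1002
cut -c1-3 users.txt       # characters 1-3 of each line: prints ali then bob
```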