05/08/2011

JAVA HeapSort

import java.io.*;
class heap_sort
{
    static int heapsize;

    public static void main(String args[]) throws IOException
    {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Heap size??");
        int n = Integer.parseInt(br.readLine());
        int i, a[] = new int[n], temp;
        for (i = 0; i < n; i++)
            a[i] = Integer.parseInt(br.readLine());
        heapsize = a.length;
        maxheap(a);                          // build a max-heap from the input
        for (i = n - 1; i >= 1; i--)
        {
            temp = a[0];                     // move the current maximum to the end
            a[0] = a[i];
            a[i] = temp;
            heapsize--;                      // shrink the heap by one
            maxheapify(a, 0);                // restore the heap property at the root
        }
        for (i = 0; i < n; i++)              // print the sorted array
            System.out.print(a[i] + "  ");
    }

    public static void maxheap(int a[])
    {
        int i;
        for (i = (heapsize - 1) / 2; i >= 0; i--)   // heapify every non-leaf node, bottom up
            maxheapify(a, i);
    }

    public static void maxheapify(int a[], int i)
    {
        int l = 2 * i + 1;                   // left child
        int r = 2 * (i + 1);                 // right child
        int largest, temp;
        if (l < heapsize && a[l] > a[i])
            largest = l;
        else
            largest = i;
        if (r < heapsize && a[r] > a[largest])
            largest = r;
        if (largest != i)
        {
            temp = a[i];                     // swap a[i] with its larger child
            a[i] = a[largest];
            a[largest] = temp;
            maxheapify(a, largest);          // sift the element further down
        }
    }
}

03/08/2011

How to calculate time complexity?

In this post I am going to discuss how to calculate the time complexity of algorithms. First I will take up a simple, non-recursive algorithm and then look at the recursive ones.
What can be simpler than an algorithm to add two numbers? The method Sum does it for us:
1.Sum(a,b)
2.    s=a+b         // c1*1
3.    return s        //c2*1

The comments indicate the total time required by the respective statements at run-time. c1 and c2 are the times required to process each statement; they depend upon the processor speed, which may differ from computer to computer, while the 1's are the number of times (i.e. the frequency) each statement is executed. Hence the total time required is c1+c2, which is equal to some constant c. In asymptotic notation we express this constant function as O(1).

Now lets look at another algorithm. This time, the sum of the first n natural numbers.

1.Nat_Sum(n)
2.    s=0                //c1*1
3.    for i=1 to n     //c2*n
4.          s=s+i        //c3*n 
5.    return s           //c4*1

Now just count the total time required. It is c1+c4+n(c2+c3), which has the form e1+e2*n for some constants e1 and e2.
Hence in Big O notation it is expressed as O(n), dropping all constants. So it is a linear algorithm.

Similarly, when there are two nested loops each running n times, the time complexity is generally O(n^2).
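As a quick check of that claim, here is a small JAVA sketch (the class and method names are just for illustration) that counts how many times the body of a doubly nested loop executes:

```java
// Counting the steps performed by a doubly nested loop.
class StepCount {
    // Returns how many times the innermost statement runs for input size n.
    static long steps(int n) {
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                count++;                 // the O(1) body executes n*n times
        return count;
    }
    public static void main(String[] args) {
        // Quadrupling n multiplies the step count by 16, exactly as O(n^2) predicts.
        System.out.println(steps(10));   // 100
        System.out.println(steps(40));   // 1600
    }
}
```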
We will look at the recursive algorithms in the next post. 

31/07/2011

JAVA Rod Cutting Problem

import java.util.*;
class rod_cutting
{
    public static void main(String args[])
    {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt(), i, j, q;
        int p[] = new int[n + 1];            // p[i] = price of a rod of length i
        for (i = 1; i <= n; i++)
            p[i] = in.nextInt();
        int r[] = new int[n + 1];            // r[j] = best revenue for a rod of length j
        r[0] = 0;
        for (j = 1; j <= n; j++)
        {
            q = Integer.MIN_VALUE;
            for (i = 1; i <= j; i++)         // try every length i for the first cut
            {
                if (p[i] + r[j - i] > q)
                    q = p[i] + r[j - i];
            }
            r[j] = q;
        }
        System.out.println("Maximum revenue: " + r[n]);
    }
}
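The nested loops above compute the recurrence r[j] = max over 1<=i<=j of (p[i] + r[j-i]). As a minimal self-contained sketch (the class and method names are my own, chosen for illustration), the classic price table 1, 5, 8, 9 for lengths 1..4 gives a best revenue of 10, obtained by cutting the rod into two pieces of length 2:

```java
// A sketch of the rod-cutting recurrence with a fixed test case.
class RodCut {
    // p[i] = price of a rod of length i (p[0] unused); returns the r[] table.
    static int[] cutRod(int[] p, int n) {
        int[] r = new int[n + 1];            // r[0] = 0: a rod of length 0 earns nothing
        for (int j = 1; j <= n; j++) {
            int q = Integer.MIN_VALUE;
            for (int i = 1; i <= j; i++)     // best over all choices of first cut
                q = Math.max(q, p[i] + r[j - i]);
            r[j] = q;
        }
        return r;
    }
    public static void main(String[] args) {
        int[] p = {0, 1, 5, 8, 9};           // prices for lengths 0..4
        System.out.println(cutRod(p, 4)[4]); // 10 (two pieces of length 2: 5 + 5)
    }
}
```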

30/07/2011

JAVA Quick Sort

import java.io.*;
class quick_sort
{
    public static void main(String args[]) throws IOException
    {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Enter the no. of elements");
        int n = Integer.parseInt(br.readLine());
        int i, a[] = new int[n];
        for (i = 0; i < n; i++)
            a[i] = Integer.parseInt(br.readLine());
        quicksort(a, 0, a.length - 1);
        for (i = 0; i < n; i++)
            System.out.print(a[i] + "  ");
    }

    public static void quicksort(int a[], int p, int r)
    {
        int q;
        if (p < r)
        {
            q = Partition(a, p, r);          // the pivot lands at index q
            quicksort(a, p, q - 1);
            quicksort(a, q + 1, r);
        }
    }

    public static int Partition(int a[], int p, int r)
    {
        int x = a[r];                        // the last element is the pivot
        int i, j, temp;
        i = p - 1;
        for (j = p; j <= r - 1; j++)
        {
            if (a[j] <= x)                   // grow the region of elements <= pivot
            {
                i++;
                temp = a[j];
                a[j] = a[i];
                a[i] = temp;
            }
        }
        temp = a[i + 1];                     // put the pivot into its final position
        a[i + 1] = x;
        a[r] = temp;
        return (i + 1);
    }
}

Quick Sort Algorithm

Lets see yet another algorithm for sorting items. This time it is quick sort, one of the faster (though not the fastest) and the most popular sorting techniques. It is used by most programmers because, although the worst-case time of quicksort is theta(n^2), the average-case complexity is a remarkable theta(n lg n), where lg is log to the base 2. So lets see how it works.
Quick sort, like merge sort, works on the divide and conquer approach. Let an array A[p..r] be available for sorting:
Divide - partition the array A into two (possibly empty) subarrays A[p..q-1] and A[q+1..r] such that all items in subarray A[p..q-1] are less than or equal to A[q] and all elements in A[q+1..r] are greater than A[q]. We have to compute the index q in this procedure as well.
Conquer - sort the two subarrays by recursive calls.
Combine - since the subarrays are sorted in place, nothing is to be done here.

I think the more important part is the division step, so lets take that first:
Let the method Partition(A,p,r) take the array A and the starting and ending indices p and r as arguments. We make the last element of the array (i.e. A[r]) the pivot and compare each of the previous elements with it, thus finding the correct position of element A[r]. The procedure looks like this -

Partition(A,p,r)
x=A[r]                     //make the last element the pivot
i=p-1                      //start one step behind
for j=p to r-1
     if A[j]<=x
          i=i+1            //count the positions holding elements less than the pivot
          swap A[i] and A[j]
exchange A[i+1] and A[r]   //put A[r] into position q (its correct position)
return i+1                 //the position q


The partitioning procedure: all light-shaded elements are less than the pivot and all deep-shaded elements are greater than it.
Now we just have to write a recursive method that sorts the array with the help of the Partition method.

QuickSort(A,p,r)
if p<r
    q=Partition(A,p,r)
    QuickSort(A,p,q-1)
    QuickSort(A,q+1,r)

To sort the entire array, the initial call should be QuickSort(A,1,A.length). The average-case time complexity is theta(n lg n); how we get this, I am going to discuss later in the Complexity section. For now you can see the JAVA code in the respective section.
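To see the partition step in action, here is a minimal, self-contained JAVA sketch (0-based indices; the class and method names are just for illustration) run on the sample array {2,8,7,1,3,5,6,4}. The pivot 4 ends up at index 3 with everything smaller to its left:

```java
// A standalone demonstration of the Partition procedure described above.
class PartitionDemo {
    static int partition(int[] a, int p, int r) {
        int x = a[r];                        // pivot: the last element
        int i = p - 1;
        for (int j = p; j < r; j++) {
            if (a[j] <= x) {                 // grow the "<= pivot" region
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }
        int t = a[i + 1]; a[i + 1] = a[r]; a[r] = t;   // place the pivot at index i+1
        return i + 1;                        // the pivot's final index q
    }
    public static void main(String[] args) {
        int[] a = {2, 8, 7, 1, 3, 5, 6, 4};
        int q = partition(a, 0, a.length - 1);
        System.out.println("pivot lands at index " + q);   // index 3
    }
}
```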

JAVA-Merge Sort

import java.io.*;
class merge_sort
{
    public static void main(String args[]) throws IOException
    {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Enter the no. of elements");
        int n = Integer.parseInt(br.readLine());
        int i, a[] = new int[n];
        for (i = 0; i < n; i++)
            a[i] = Integer.parseInt(br.readLine());
        mergesort(a, 0, a.length - 1);
        for (i = 0; i < n; i++)
            System.out.print(a[i] + "  ");
    }

    public static void mergesort(int a[], int p, int r)   // divide recursively
    {
        int q;
        if (p < r)
        {
            q = (p + r) / 2;
            mergesort(a, p, q);
            mergesort(a, q + 1, r);
            merge(a, p, q, r);                            // combine the sorted halves
        }
    }

    public static void merge(int a[], int p, int q, int r)
    {
        int L[] = new int[q - p + 2];                     // left half plus one sentinel slot
        int R[] = new int[r - q + 1];                     // right half plus one sentinel slot
        int i, j;
        for (i = 0; i < L.length - 1; i++)
            L[i] = a[p + i];
        for (j = 0; j < R.length - 1; j++)
            R[j] = a[q + j + 1];
        L[i] = Integer.MAX_VALUE;                         // sentinels mark the end of each half
        R[j] = Integer.MAX_VALUE;
        i = j = 0;
        for (int k = p; k <= r; k++)                      // always take the smaller head element
        {
            if (L[i] <= R[j])
            {
                a[k] = L[i];
                i++;
            }
            else
            {
                a[k] = R[j];
                j++;
            }
        }
    }
}

29/07/2011

Time Complexity

Time complexity of an algorithm is the function that tells the amount of time the algorithm should take in terms of the size of its input. "Time" here means the processing time taken during the run of the algorithm. Thus all step counts are added up to calculate the total run-time required. This may be a constant or some function of n.
There are certain asymptotic notations with which we express complexities. I will deal with those in my next post. Here I will focus more on the types of time complexities.
Since different algorithms may take different amounts of time even on inputs of the same size, the most commonly used measure of time complexity, the worst-case time complexity of an algorithm, denoted T(n), is the maximum amount of time taken on any input of size n. Time complexities are classified by the nature of the function T(n): for example, an algorithm may depend linearly on the input size, or quadratically, cubically or even exponentially. We express this as O(g(n)), read as "big-oh of g(n)". If there are two algorithms written for the same problem, one of time complexity O(n) and the other of, say, O(n^2), and the input size is made 4 times larger, then the latter will take 4^2 = 16 times more time than it was taking previously, while the linear algorithm will take just 4 times more time. Hence the linear one is the better algorithm.

Generally, the different time complexities that we come across are linear [O(n)], quadratic [O(n^2)], cubic [O(n^3)], logarithmic [O(lg n)] and exponential [O(2^n)]. Here "lg n" means "log n to the base 2". It is evident that:
Time complexities of different functions:
O(lg n) < O(n) < O(n lg n) < O(n^2) < O(n^3) < O(2^n) for all sufficiently large n.
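This ordering can be checked numerically. A small JAVA sketch (the class and method names are my own) that evaluates each of the functions above at a power-of-two n:

```java
// Evaluating the common growth functions at a single point to see their ordering.
class Growth {
    // Returns {lg n, n, n lg n, n^2, n^3, 2^n} for a power-of-two n.
    static long[] values(int n) {
        long lg = Long.numberOfTrailingZeros(n);   // exact lg for powers of two
        return new long[]{lg, n, n * lg, (long) n * n, (long) n * n * n, 1L << n};
    }
    public static void main(String[] args) {
        for (long v : values(16))
            System.out.print(v + " ");   // 4 16 64 256 4096 65536 - strictly increasing
    }
}
```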

What is Complexity??

Some algorithms are more efficient than others, so we always like to choose the algorithm which is better; but how do we decide which is better? For that we have measures for comparing algorithms.

The complexity of an algorithm is what determines the efficiency of the algorithm in terms of the input data to be processed. There are two main types of complexity, namely time complexity and space complexity. As the names suggest, time complexity distinguishes algorithms on the basis of run-time, while space complexity differentiates them on the basis of the amount of memory the algorithm requires.

Although there are other complexities as well, like communication complexity (the amount of communication) and circuit complexity (the number of gates used), we are not much bothered about these.

Merge Sort(Algorithm)

The merge sort is another sorting technique that uses the divide and conquer approach, i.e. it breaks the original problem into sub-problems similar to the original but smaller in size; it then solves each sub-problem recursively, and finally combines all the solved sub-problems. Hence merge sort follows this procedure:
Divide-divide the n-element array into sub-sequence of n/2 elements each.
Conquer-Sort the two sub-sequence recursively.
Combine-Merge the two sorted sub-sequences.
When the array is of length 1, it is already sorted. The main step is the combine step, so first we will deal with it.
For merging we call a method Merge(A,p,q,r), where A is the array and p, q, r are indices of the array such that p<=q<r. Here we assume that the sub-arrays A[p..q] and A[q+1..r] are already sorted. You can imagine them as a couple of piles of sorted cards (smallest on top) that we wish to merge. We take the topmost card from each pile, find the smaller of the two and place it on a third pile (the output pile). We keep comparing the topmost cards of the two piles and placing the smaller on the output pile until one pile is empty, after which we can simply place the whole remaining pile on the output pile. We can check this condition (of a pile being empty) by placing an imaginary card of infinite value (called the sentinel card) at the end of both piles. We use infinity because on reaching it in one pile, the other pile can only hold cards smaller than it (except its own sentinel card). Here is the pseudocode of the merge procedure:

Merge(A,p,q,r)
n1=q-p+1   //n1 is the size of the first pile
n2=r-q     //n2 is the size of the second pile
create arrays L[1...n1+1] and R[1...n2+1]
for i=1 to n1
      L[i]=A[p+i-1]
for j=1 to n2
      R[j]=A[q+j]
L[n1+1]=infinity   //the sentinel cards
R[n2+1]=infinity
i=j=1
for k=p to r
    if L[i]<=R[j]
         A[k]=L[i]
         i=i+1
    else
         A[k]=R[j]
         j=j+1
Merge Sort

Now that we have seen how the merge procedure works, it is simple to divide the array and sort each sub-division. The method Merge_Sort(A,p,r) takes the array A from indices p to r, recursively divides it and sends the sub-arrays to the merge procedure. It should be noted that if p>=r then the array has at most 1 element and is already sorted. Hence the procedure looks like this:

Merge_Sort(A,p,r)
if p<r
      q=(p+r)/2    //division into two sub-arrays
       Merge_Sort(A,p,q)
       Merge_Sort(A,q+1,r)
       Merge(A,p,q,r)

To sort the entire array the initial call should be Merge_Sort(A,1,A.length).
The time complexity of merge sort in the worst case is theta(n lg n). You can learn more about time complexities on the Time Complexities page.
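The card-pile Merge procedure above translates almost directly into JAVA. Here is a minimal runnable sketch (0-based indices, with Integer.MAX_VALUE standing in for the infinity sentinel; the class name is illustrative) merging two sorted halves of one array:

```java
// A standalone demonstration of the Merge step with sentinel values.
class MergeDemo {
    static void merge(int[] a, int p, int q, int r) {
        int n1 = q - p + 1, n2 = r - q;
        int[] L = new int[n1 + 1], R = new int[n2 + 1];
        for (int i = 0; i < n1; i++) L[i] = a[p + i];
        for (int j = 0; j < n2; j++) R[j] = a[q + 1 + j];
        L[n1] = Integer.MAX_VALUE;           // the sentinel cards
        R[n2] = Integer.MAX_VALUE;
        int i = 0, j = 0;
        for (int k = p; k <= r; k++)         // always take the smaller top card
            a[k] = (L[i] <= R[j]) ? L[i++] : R[j++];
    }
    public static void main(String[] args) {
        int[] a = {2, 4, 5, 7, 1, 2, 3, 6};  // two sorted halves in one array
        merge(a, 0, 3, 7);
        System.out.println(java.util.Arrays.toString(a));  // [1, 2, 2, 3, 4, 5, 6, 7]
    }
}
```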

27/07/2011

Insertion Sort

#include<stdio.h>
int main(void)
{
    int a[50];
    int i,j,key,n;
    printf("Enter the no. of elements: ");
    scanf("%d",&n);
    printf("Now enter the elements: ");
    for(i=0;i<n;i++)
        scanf("%d",&a[i]);

    for(j=1;j<n;j++)              /* the first element is already "sorted" */
    {
        key=a[j];
        i=j-1;
        while(i>=0 && a[i]>key)   /* shift larger elements one step right */
        {
            a[i+1]=a[i];
            i=i-1;
        }
        a[i+1]=key;               /* insert the key into its correct slot */
    }
    for(i=0;i<n;i++)
        printf("%d ",a[i]);
    return 0;
}

// Note: remember we are starting the for loop from 1 (not 2 as in the pseudocode) because array indices start from 0

Insertion Sort

In JAVA
class insertion
{
    public static int[] sort(int a[])    // an entry-point main cannot take int[], so this is a plain sort method
    {
        int i, j, key, n;
        n = a.length;
        for (j = 1; j < n; j++)
        {
            key = a[j];
            i = j - 1;
            while (i >= 0 && a[i] > key) // shift larger elements one step right
            {
                a[i + 1] = a[i];
                i = i - 1;
            }
            a[i + 1] = key;              // insert the key into its correct slot
        }
        return a;
    }
}
// Note that we are starting the for loop from 1 because array indices start from 0

Insertion Sort

Guys, we all have studied bubble and selection sort in school. This is another type of sorting.

Insertion sort is an effective algorithm for small sets of data. The idea comes from this: when we play a game of cards we usually sort them. We place all the cards face down on the table and start with an empty left hand. Then we pick up one card from the table and place it in our left hand at its correct position. Like this we go on picking up cards from the table and placing them in their correct positions in the left hand until all cards are picked up. Every time we pick up a card, the ones in our hand are already sorted.
Hence the pseudocode for this algorithm goes like this.....

Insertion-sort(a)              //where a is the array
for j=2 to a.length            //because the first card is already sorted
    key=a[j]
    //insert a[j] into the sorted sequence a[1...j-1]
    i=j-1
    while i>0 and a[i]>key
        a[i+1]=a[i]
        i=i-1
    a[i+1]=key

The worst-case time complexity is O(n^2).
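The pseudocode can be checked with a compact, runnable JAVA version (0-based arrays, so the outer loop starts at j = 1 rather than 2; the class name is illustrative):

```java
// Insertion sort exactly as in the pseudocode, adapted to 0-based arrays.
class InsertionDemo {
    static void sort(int[] a) {
        for (int j = 1; j < a.length; j++) {
            int key = a[j];
            int i = j - 1;
            while (i >= 0 && a[i] > key) {   // shift larger cards one step right
                a[i + 1] = a[i];
                i--;
            }
            a[i + 1] = key;                  // drop the card into its slot
        }
    }
    public static void main(String[] args) {
        int[] a = {5, 2, 4, 6, 1, 3};
        sort(a);
        System.out.println(java.util.Arrays.toString(a));  // [1, 2, 3, 4, 5, 6]
    }
}
```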