What’s in your Knapsack?

Authors : Amandeep, Drishti, Srinidhi

Hola geeks!!

According to a proverb, “If you have figs in your knapsack, everyone will want to be your friend.” But do you know the best algorithm to carry the maximum figs in your knapsack? Here’s the post that will help you!

Fractional Knapsack problem statement

We are given the profits and weights of ‘N’ items. We are required to put them in a knapsack of capacity ‘W’ so as to maximize the total profit. Unlike 0-1 knapsack, where we must either take an item whole or leave it, in fractional knapsack we can take fractions of an item, thus maximizing our total profit.

Consider the given problem:

Objects    object 1    object 2    object 3
Profit     25          24          15
Weight     18          15          10

Capacity (W) = 20

In this problem, 3 greedy cases arise:

1. Greedy about Profit

In this case, we arrange the objects in decreasing order of their profits. Refer to the table:

Objects    object 1    object 2    object 3
Profit     25          24          15
Weight     18          15          10

Soln: 25 + (2/15) × 24 = 28.2

Explanation: As we are greedy about profit, we take object 1 with the highest profit and put it in the knapsack. Now we are left with capacity 2 (20 − 18). Hence, we take the next highest-profit object 2 and take a 2/15 fraction of it. Thus, our answer comes out to be 28.2.

2. Greedy about Weight

In this case, we arrange the objects in increasing order of their weights and take the minimum weights first. Refer to the table:

Objects    object 3    object 2    object 1
Profit     15          24          25
Weight     10          15          18

Soln: 15 + (10/15) × 24 = 31

Explanation: As we are greedy about weights, we take the minimum-weight objects first. Hence, we take the whole of object 3 with profit 15, and are left with capacity 10 (20 − 10). Now we take a 10/15 fraction of object 2, making our profit 31.

3. Greedy about Both (optimal solution)

This case gives us the optimal solution for our problem. Here, we arrange the objects in decreasing order of their profit/weight ratio. Refer to the table:

Objects         object 2    object 3    object 1
Profit          24          15          25
Weight          15          10          18
Profit/Weight   1.6         1.5         ≈1.39

Soln: 24 + (5/10) × 15 = 31.5

Explanation: As we have already arranged the objects according to their profit/weight ratio, we choose object 2 with the highest P/W ratio and are left with capacity 5 (20 − 15). So, we take a 5/10 fraction of object 3, and the overall profit comes out to be 31.5, the highest among all 3 cases.

Algorithm (optimal solution)

1. for i = 1 to N
   1.1 Calculate the profit/weight ratio of item i
2. Sort the objects in decreasing order of profit/weight ratio
3. for i = 1 to N
   3.1 if w(i) <= W            // the whole item fits
       3.1.1 W = W - w(i)
       3.1.2 P = P + p(i)
   3.2 else                    // take only the fitting fraction and stop
       3.2.1 P = P + p(i) * (W / w(i))
       3.2.2 break
4. End
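Here is a minimal C sketch of this greedy strategy (the type and function names are ours), using qsort to order items by ratio; it reproduces the 31.5 answer from the example above:

#include <stdio.h>
#include <stdlib.h>

/* One item: profit, weight, and precomputed profit/weight ratio. */
typedef struct { double profit, weight, ratio; } Item;

/* qsort comparator: decreasing order of profit/weight ratio. */
static int by_ratio_desc(const void *a, const void *b) {
    double ra = ((const Item *)a)->ratio, rb = ((const Item *)b)->ratio;
    return (ra < rb) - (ra > rb);
}

double fractional_knapsack(Item items[], int n, double W) {
    double total = 0.0;
    qsort(items, n, sizeof(Item), by_ratio_desc);
    for (int i = 0; i < n && W > 0; i++) {
        if (items[i].weight <= W) {     /* the whole item fits */
            W -= items[i].weight;
            total += items[i].profit;
        } else {                        /* take only the fitting fraction */
            total += items[i].profit * (W / items[i].weight);
            W = 0;
        }
    }
    return total;
}

int main(void) {
    Item items[] = { {25, 18, 25.0/18}, {24, 15, 24.0/15}, {15, 10, 1.5} };
    printf("%.1f\n", fractional_knapsack(items, 3, 20));   /* prints 31.5 */
    return 0;
}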

Do try analyzing this algorithm, because practice makes perfect!

For any doubts or ambiguities, mail us at everythinggeeky.101@gmail.com, or comment below! 

Don’t forget to Like, Comment and Share with your friends.

Stay tuned for more interesting and geeky information, right here!

See you next week!

Cheers! 🙂

Team Bitxorbytes

Follow us on instagram:  @bitxorbytes

Search yourself!

Hello geeks! It has been a long time since our last post, so here’s another very easy and interesting topic to enjoy, ‘Searching’!

Linear Search

Linear search, also called sequential search, is the simplest way to search for an element in a given list. It is the search technique you use almost everywhere in real life.

According to this concept, an item (the key) is searched for by comparing each given item with the key, until it is either found or the list of items is exhausted.

Algorithm

1. Start
2. Set i to 0
3. If i < number of elements in array
   3.1 Check if array[i] equals key
       3.1.1 Print: element present in array at position (i+1), and stop
   3.2 Else
       3.2.1 Set i to i+1
       3.2.2 Go to step 3
4. Else return -1
5. End

Explanation

In the beginning, a variable i is taken for iteration and is set to 0. Key is the element to be searched. The element at index ‘i’ is compared with the key. If the two are equal, the position and the element are printed and the search stops. If not, the value of ‘i’ is incremented. These steps execute only while ‘i’ is less than the total number of elements in the array.

Let’s understand with an example:
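A minimal C sketch of the algorithm (function name ours):

#include <stdio.h>

/* Returns the index of key in array[0..n-1], or -1 if absent. */
int linear_search(const int array[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (array[i] == key)
            return i;          /* found: stop at the first match */
    return -1;                 /* list exhausted */
}

int main(void) {
    int a[] = {7, 3, 9, 1, 4};
    printf("%d\n", linear_search(a, 5, 9));   /* prints 2 */
    return 0;
}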

Uses

In real life, linear search can be used to search a word in the dictionary or find a book in the library. Also, linear search can be used to search an element in a given small list.

Time and space complexity

Worst-case performance        O(N)
Best-case performance         O(1)
Average-case performance      O(N)
Worst-case space complexity   O(1)

Advantages

  1. Simple and easy to understand
  2. Easy to implement
  3. If the element to be searched is the first element, then you need not search the entire list!
  4. Works on both sorted and unsorted arrays

Disadvantages

1. Not suitable for lists of large size

2. It is a slow searching algorithm

3.  If the element to be searched is the last element, then this searching technique becomes inefficient

Binary search

Binary search is a fast searching algorithm that works on the principle of “Divide and Conquer”. However, the necessary condition is that the list of elements must be in sorted order (either ascending or descending). With a time complexity of O(log n), it is far better than linear/sequential search.

Algorithm


int Binary_search (array, s, e, x)       //function representation having integer return type
1. Start
2. While s <= e
   2.1 Declare mid = s + (e-s)/2
   2.2 Check if array[mid] equals x
       2.2.1 return mid
   2.3 else if array[mid] is less than x
       2.3.1 s = mid + 1
   2.4 else e = mid - 1
3. return -1
4. End


** s stands for starting index, e stands for ending index, x is the value to be searched

Explanation

In this algorithm (above), we pass the following as parameters: the array, the starting index, the ending index, and the value to be searched (the key). The function returns the index at which the key is present. As per the principle of binary search, the key is compared with the middle element, and consequently the array/list of elements is divided into 2 parts. 4 cases arise:

1. The middle element is itself the key, hence the value of its index is returned.

2. The key is less than the middle element: we reduce our workspace from the whole array to the first half, searching the sub-array from ‘s’ till ‘e = mid - 1’.

3. The key is greater than the middle element: we reduce our workspace from the whole array to the second half, searching the sub-array from ‘s = mid + 1’ till ‘e’.

4. The element does not exist in the array: ‘-1’ is returned, signifying that the element is not found.

Let’s understand with an example:
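A minimal iterative C sketch of the function described above (the array is assumed sorted ascending):

#include <stdio.h>

/* s = starting index, e = ending index, x = value to be searched. */
int binary_search(const int array[], int s, int e, int x) {
    while (s <= e) {
        int mid = s + (e - s) / 2;   /* avoids overflow of (s+e)/2 */
        if (array[mid] == x)
            return mid;              /* case 1: middle element is the key */
        else if (array[mid] < x)
            s = mid + 1;             /* case 3: search the right half */
        else
            e = mid - 1;             /* case 2: search the left half */
    }
    return -1;                       /* case 4: not present */
}

int main(void) {
    int a[] = {1, 4, 7, 9, 12, 15};
    printf("%d\n", binary_search(a, 0, 5, 12));   /* prints 4 */
    return 0;
}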

Time and space complexity

Worst-case performance        O(log N)
Best-case performance         O(1)
Average-case performance      O(log N)
Worst-case space complexity   O(1)

Time complexity

You must be wondering how binary search’s complexity is O(log N). Let’s determine this!

Each comparison halves the remaining search space: after 1 step N/2 elements remain, after 2 steps N/4, and after k steps N/2^k. The search ends when a single element remains, i.e. when N/2^k = 1, which gives k = log₂N comparisons.

Hence the searching is done with time complexity O(log N).

Use

It is the go-to search technique for sorted data, and its iterative form needs no extra space.

Advantage

It’s more efficient than linear search: linear search takes, on average, N/2 comparisons (where N is the number of elements in the array) and N comparisons in the worst case, whereas binary search takes log₂(N) comparisons in both the average and worst cases. So for a million elements, linear search would take an average of 500,000 comparisons, whereas binary search would take about 20.

Disadvantages

1. It’s not as simple as linear search.

2. Its biggest disadvantage is that it works only on sorted lists; it fails on unsorted lists.

3. Also, binary search is not useful if the data structure does not allow random access. E.g., in an array it’s easy to jump to the middle element, but in a linked list binary search is of little use, since reaching the middle itself takes linear time.

Do try analyzing these algorithms and their time complexities, because practice makes perfect! And refer to our time complexity post for help.

For any doubts or ambiguities, mail us at everythinggeeky.101@gmail.com, or comment below! 

Don’t forget to Like, Comment and Share with your friends.

Stay tuned for more interesting and geeky information, right here!

See you next week!

Cheers!

Team Bitxorbytes

Follow us on instagram:  @bitxorbytes

Image Credits: arcadiawindber.com/searching-for-something-to-do

Heap your sheep!

Heap concepts and Heap sort

A heap is a special tree-based data structure, the tree being a complete binary tree (all levels completely filled, except possibly the last). It is a balanced binary-tree structure in which each node is compared with its children and arranged according to the type of heap.

Binary Tree

A binary tree is a tree data structure in which each node has at most 2 children (i.e. 0, 1 or 2), referred to as the left and right child.


Complete binary tree.

A binary tree is a complete binary tree if all levels are completely filled except possibly the last level, and the last level has all keys as far left as possible.


Heaps are of 2 types :-

1. Max Heap: A max heap is a heap in which every parent node’s value is greater than (or equal to) its children’s values. This must hold recursively for all sub-trees in the binary tree.


2. Min Heap: A min heap is a heap in which every parent node’s value is lesser than (or equal to) its children’s values. This must hold recursively for all sub-trees in the binary tree.

Heapify !!!!

Heapify is the process of converting a binary tree into a Heap data structure, i.e. to convert it into either a max heap or min heap.

Algorithm

1. Start
2. Take as input: array, size of array n, index i
3. Assign values
   3.1 largest = i
   3.2 left = 2 * i + 1   (left child)
   3.3 right = 2 * i + 2  (right child)
4. if left < n and array[left] > array[largest]
   4.1 Assign largest = left
5. if right < n and array[right] > array[largest]
   5.1 Assign largest = right
6. if largest != i
   6.1 swap array[i] and array[largest]
   6.2 call the heapify function again with parameters array, n, largest
7. End

Explanation

The heapify function converts a binary tree into a max/min heap. Here we create a MAX heap using the above algorithm.

We take as parameters the array, its size n and an index i. We initialize a variable largest with the index. The left and right children are computed from the index. Three conditions may occur:

  1. left < n and array[left] > array[largest], i.e. the left child’s value is greater than its parent’s: we assign largest = left.
  2. right < n and array[right] > array[largest], i.e. the right child’s value is greater than the current largest: we assign largest = right.
  3. If largest != i, the subtree is still not a heap: we swap array[i] with array[largest] and call heapify again, passing array, n, largest as parameters.
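In C, the max-heapify step described above might be sketched like this:

/* Sift-down heapify: assuming the subtrees below index i are already
   max heaps, restore the max-heap property at index i. */
void heapify(int array[], int n, int i) {
    int largest = i;
    int left  = 2 * i + 1;
    int right = 2 * i + 2;

    if (left < n && array[left] > array[largest])
        largest = left;
    if (right < n && array[right] > array[largest])
        largest = right;

    if (largest != i) {            /* not a heap at i yet: fix and recurse */
        int tmp = array[i];
        array[i] = array[largest];
        array[largest] = tmp;
        heapify(array, n, largest);
    }
}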

Heap sort

Heap sort is a sorting technique which uses the heap data structure to sort elements. It is very similar to selection sort, in that we repeatedly place the largest element at the end.

Algorithm

1. Start
2. Build a max heap
3. Swap the root (the largest element) with the last element of the heap
4. Reduce the heap size by 1 and call the heapify function on the root
5. Repeat steps 3 to 4 till all elements are sorted
6. End

Let’s understand through an example:

1. A max heap is created.
2. The root node and the last element of the heap are swapped, and the heapify function is called.
3. The heapify function converts the tree into a max heap again.
4. Steps 2 and 3 repeat, each pass fixing one more element at the end, until the whole array is sorted.

Explanation

In heap sort, we first create a max heap. Now the root element (array[0]) is the largest element of the current heap. We swap this root element with the last element of the heap array. Then we exclude the last element, as it is already in its correct position, decrease the size of the heap by 1, and call the heapify function. We repeat the above steps till all the elements are in their correct positions.
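Putting it together, a C sketch of heap sort, reusing the heapify function sketched above:

#include <stdio.h>

void heapify(int array[], int n, int i);   /* as sketched above */

void heap_sort(int array[], int n) {
    /* Build the initial max heap, bottom-up from the last parent. */
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(array, n, i);
    /* Repeatedly move the root (maximum) to the end and shrink the heap. */
    for (int i = n - 1; i > 0; i--) {
        int tmp = array[0];
        array[0] = array[i];
        array[i] = tmp;
        heapify(array, i, 0);              /* exclude the sorted tail */
    }
}

int main(void) {
    int a[] = {4, 10, 3, 5, 1};
    heap_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);   /* 1 3 4 5 10 */
    return 0;
}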

Time and Space Complexity

Worst-case complexity         O(N log N)
Best-case complexity          O(N log N)
Average-case complexity       O(N log N)
Worst-case space complexity   O(1)

Explanation

In this sort, we swap the root element with the last element and heapify the root. Each such step takes O(log N), and since we repeat it N times, the time complexity of heap sort is O(N log N).

In simple words: building the heap takes O(N), and each of the N extractions makes one O(log N) call to the heapify function. Thus the complexity is O(N) + N × O(log N) = O(N log N).

Advantages

Its main advantage is efficiency: O(N log N) execution time with O(1) extra space.

Disadvantage

In practice, it is usually slower than merge sort and quick sort.


How about an awesome summary on sorting in the next post? Stay tuned!

For any doubts or ambiguities, mail us at everythinggeeky.101@gmail.com, or comment below! 

Don’t forget to Like, Comment and Share with your friends.

Stay Tuned for Everything Geeky.

See you next week!

Cheers!

Team Bitxorbytes

Follow us on instagram:  @bitxorbytes

Image Credits: http://clipart-library.com/baby-sheep-cliparts.html

Who’s counting?

Count Sort and Radix Sort

Hello Geeks! How have you been? Together, we have completed bubble sort, insertion sort, selection sort, merge sort and quick sort!

Whoa! That’s impressive, right?

Today, we shall discuss a couple more sorting techniques: counting sort and radix sort. These are very intriguing and captivating, so read on!

Count sort

Counting sort is a non-comparison sorting technique, usually used along with radix sort. In this technique, the sorting is based on the number of occurrences of each element in the array.

Explanation

The best explanation for count sort is possibly through sketches and images. These two images will tell you a thousand words on counting sort! So, no perplexing and puzzling words, and only crystal clear concepts!

Understanding Counting sort:

Once we get this auxiliary table, then we proceed as:

Getting the complete sorted array!
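In code, the same process might look like the following C sketch (the value bound MAX_VALUE and the function name are our assumptions):

#include <stdio.h>

#define MAX_VALUE 10   /* assumed exclusive bound on the input values */

/* Stable counting sort for integers in the range [0, MAX_VALUE). */
void counting_sort(int array[], int n) {
    int count[MAX_VALUE] = {0};
    int output[n];                       /* C99 variable-length array */

    for (int i = 0; i < n; i++)          /* 1. count occurrences */
        count[array[i]]++;
    for (int v = 1; v < MAX_VALUE; v++)  /* 2. prefix sums = final positions */
        count[v] += count[v - 1];
    for (int i = n - 1; i >= 0; i--)     /* 3. place elements, back to front */
        output[--count[array[i]]] = array[i];   /* keeps equal keys stable */
    for (int i = 0; i < n; i++)
        array[i] = output[i];
}

int main(void) {
    int a[] = {4, 2, 2, 8, 3, 3, 1};
    counting_sort(a, 7);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);   /* 1 2 2 3 3 4 8 */
    return 0;
}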

Use

Mostly, counting sort is used along with radix sort to sort a given list of bounded integers. But it can also be used individually, if the range of numbers is small.

Time and space complexity

Worst-case performance        O(N + k)
Best-case performance         O(N + k)
Average-case performance      O(N + k)
Worst-case space complexity   O(N + k)

*k is the number of possible values

Advantages

  1. It is a stable sorting algorithm, meaning it preserves the existing order of elements with equal keys
  2. Takes linear time for operation

Disadvantages

1. Needs more working memory

2. Implementation is a bit more difficult than other sorting techniques

Radix Sort

Radix sort is a linear-time sorting algorithm. It is used to sort elements having larger values.

The idea behind radix sort is to sort elements digit by digit, starting from the least significant digit (LSD) up to the most significant digit (MSD). Radix sort uses counting sort as a sub-sort.

Understanding Radix sort:

1. Find the number of digits in the largest element.
2. Obtain the required digit (LSD to MSD) using the modulus operator (%).
3. Sort the elements based on that digit using count sort.
4. Repeat for consecutive digits till the MSD has been sorted.

Let us understand with an example:

Explanation

The least significant digit of each element is obtained and compared. Elements are sorted by their LSD, using COUNT SORT.

Then the next digit is obtained and compared, and the elements are sorted by the order of their first 2 digits (from the LSD).

Depending on the number of digits in the largest element, the consecutive digits are compared and sorted in the same way.

Finally, the most significant digit is obtained and sorted using count sort. Now the whole array is in sorted order.

A point to be noted here is that radix sort works because the order of the elements is taken into account and preserved in each step (count sort is stable). This ensures that the elements end up sorted with respect to all their digits.
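A possible C sketch of LSD radix sort for non-negative integers, using a stable per-digit counting sort (function names ours):

#include <stdio.h>

/* Stable counting sort on one decimal digit (digit place = exp). */
static void count_sort_by_digit(int array[], int n, int exp) {
    int output[n];                        /* C99 variable-length array */
    int count[10] = {0};

    for (int i = 0; i < n; i++)
        count[(array[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++)
        count[d] += count[d - 1];
    for (int i = n - 1; i >= 0; i--)      /* back to front keeps order stable */
        output[--count[(array[i] / exp) % 10]] = array[i];
    for (int i = 0; i < n; i++)
        array[i] = output[i];
}

void radix_sort(int array[], int n) {
    int max = array[0];
    for (int i = 1; i < n; i++)           /* largest element decides digit count */
        if (array[i] > max) max = array[i];
    for (int exp = 1; max / exp > 0; exp *= 10)
        count_sort_by_digit(array, n, exp);
}

int main(void) {
    int a[] = {170, 45, 75, 90, 802, 24, 2, 66};
    radix_sort(a, 8);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);   /* 2 24 45 66 75 90 170 802 */
    return 0;
}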

Time and space complexity

Worst-case performance        O(nk)
Best-case performance         O(nk)
Average-case performance      O(nk)
Worst-case space complexity   O(n+k)
Complexity analysis of radix sort
*k is the length of the keys (number of digits); when k approaches n, this degrades toward O(n²)

Uses

It is used to create a ‘suffix array’ using the skew DC3 algorithm (Kärkkäinen-Sanders-Burkhardt). 

Advantages

  1. Radix sort is faster for sorting keys with few digits.
  2. It is stable, because it preserves the existing order of equal keys.

Disadvantages

It is not efficient on very long keys, because the total sorting time is proportional to the key length.

For any doubts or ambiguities, mail us at everythinggeeky.101@gmail.com, or comment below! 

Don’t forget to Like, Comment and Share with your friends.

Stay Tuned for Everything Geeky.

See you next week!

Cheers!

Team Bitxorbytes

Follow us on instagram:  @bitxorbytes

Cover Image Credits: https://alphablots.com/product/typographic-birthday-cards-pack-of-six/

Sorting logged!

Hello Geeks! In the previous post we discussed the three basic sorting techniques: bubble sort, selection sort and insertion sort. In today’s post we shall discuss two much more efficient sorting techniques: “Merge sort” and “Quick sort”. So let’s get started!

Merge Sort

Merge sort is a simple divide-and-conquer algorithm. It involves breaking down the array until each sub-list consists of only one element of the original array; merging such sub-lists then becomes trivial. The sub-lists are merged such that each combined list is in sorted order.

This is the most popular example of the divide-and-conquer paradigm, which we discussed in the earlier post.

Algorithm (MergeSort)

1. Start
2. If start < end
   2.1 mid = (start + end)/2
   2.2 merge_sort(array, start, mid)
   2.3 merge_sort(array, mid+1, end)
   2.4 merge(array, start, mid, end)
3. End

Algorithm (Merge)

void merge(int array[ ], int start, int mid, int end) {

1. Declare a temporary array temp, and store int x = start, z = mid+1, k = 0

2. While x <= mid and z <= end do
   2.1 if array[x] <= array[z]
       2.1.1 temp[k] = array[x]
       2.1.2 k++ and x++
   2.2 else
       2.2.1 temp[k] = array[z]
       2.2.2 k++ and z++

3. While x <= mid              // left half has leftovers
   3.1 temp[k] = array[x], then k++ and x++

4. While z <= end              // right half has leftovers
   4.1 temp[k] = array[z], then k++ and z++

5. for i = 0 till i < k
   5.1 array[start + i] = temp[i]
}

Explanation

Let the array have indexes from start to end, and let mid be the central or middle index. The array [start to end] can then be divided into array [start to mid] and array [mid+1 to end].

Once divided into two halves, the halves are sorted individually. Then they are combined into one sorted array.

In the divide step, the array (or a part of it) is divided into two roughly equal parts. In the conquer step, the two sub-arrays are sorted. And in the combine step, both are merged such that the combination gives a sorted array. Merge sort uses recursion, meaning the function calls itself.

Let us understand with an example:

Now how is merging actually done? Watch this!

Merging of two sorted arrays into one sorted array
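As a concrete sketch, here is a minimal C implementation following the algorithm above, with the merge step going through a temporary buffer as described:

#include <stdio.h>

/* Merge the sorted halves array[start..mid] and array[mid+1..end]. */
void merge(int array[], int start, int mid, int end) {
    int temp[end - start + 1];            /* C99 variable-length array */
    int x = start, z = mid + 1, k = 0;

    while (x <= mid && z <= end)          /* pick the smaller front element */
        temp[k++] = (array[x] <= array[z]) ? array[x++] : array[z++];
    while (x <= mid) temp[k++] = array[x++];   /* copy any leftovers */
    while (z <= end) temp[k++] = array[z++];
    for (int i = 0; i < k; i++)
        array[start + i] = temp[i];
}

void merge_sort(int array[], int start, int end) {
    if (start < end) {
        int mid = start + (end - start) / 2;
        merge_sort(array, start, mid);        /* divide + conquer left half */
        merge_sort(array, mid + 1, end);      /* divide + conquer right half */
        merge(array, start, mid, end);        /* combine */
    }
}

int main(void) {
    int a[] = {38, 27, 43, 3, 9, 82, 10};
    merge_sort(a, 0, 6);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);   /* 3 9 10 27 38 43 82 */
    return 0;
}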

Use

Many search engines use merge sort to combine data fetched from other websites.

Time and space complexity

Worst-case performance        O(N log N)
Best-case performance         O(N log N)
Average-case performance      O(N log N)
Worst-case space complexity   O(N)

Advantages

  1. Has one of the best time complexities
  2. Well suited for a large collection of data

Disadvantages

1. Takes up more space in memory for the sub-lists


Quick sort

Quick sort is a divide-and-conquer algorithm in which an array is divided into parts based on a pivot element and later combined in a sorted manner.

A pivot element is chosen, based on which the array will be divided. The pivot may be the first element, the last element, the middle element or any other suitable element, as per choice.

All elements of the array are compared to the pivot, and the final position of the pivot element is determined.

The array thus splits into two parts: one containing all elements less than the pivot, and the other containing all elements greater than the pivot.

The above procedure is applied recursively to both parts and to all the sub-arrays formed, until each sub-array contains at most 1 element. The sub-arrays then together form the sorted array.

QuickSort algorithm

If beg < end, do the following steps:

  1. Partition the array around the chosen pivot; let p be the pivot’s final position. (Algorithm given below)
  2. Apply quicksort on the left subarray as quicksort(a, beg, p-1)
  3. Apply quicksort on the right subarray as quicksort(a, p+1, end)

Partitioning algorithm

  1. Choose the pivot element (here, the last element).
  2. Take i and j pointing to the beginning and end of the range to be partitioned.
  3. While a[i] < pivot, increment i.
  4. While a[j] >= pivot, decrement j.
  5. If steps 3 and 4 have both stopped and i is still less than j, swap a[i] and a[j], and repeat from step 3.
  6. If j has become less than i, swap a[i] with the pivot and return i.

Let us understand with an example.

Explanation

The first image explains the partitioning of an array into sub-arrays.

‘i’ is incremented and ‘j’ is decremented while a[i] is less than the pivot and a[j] is greater than or equal to the pivot. Otherwise, a[i] and a[j] are swapped. When all elements are analysed and j becomes less than i, the array is partitioned into 2 subarrays (possibly of unequal length).

Quick sort algorithm is applied on both these subarrays recursively, thus first dividing them (based on the partitioning algorithm) into subarrays of length 1, and finally merging them in sorted order.

The second image depicts the quicksort algorithm, i.e. division and sorting then merging(conquering) of the subarrays.
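As a concrete sketch, here is a minimal C quicksort; for simplicity it uses the Lomuto partition scheme, which also picks the last element as pivot (a variant of the two-pointer scheme described above):

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: pivot = last element; returns its final position. */
static int partition(int a[], int beg, int end) {
    int pivot = a[end], i = beg - 1;
    for (int j = beg; j < end; j++)
        if (a[j] < pivot)                /* grow the "less than pivot" zone */
            swap(&a[++i], &a[j]);
    swap(&a[i + 1], &a[end]);            /* drop pivot between the two parts */
    return i + 1;
}

void quicksort(int a[], int beg, int end) {
    if (beg < end) {
        int p = partition(a, beg, end);
        quicksort(a, beg, p - 1);        /* left subarray  */
        quicksort(a, p + 1, end);        /* right subarray */
    }
}

int main(void) {
    int a[] = {10, 80, 30, 90, 40, 50, 70};
    quicksort(a, 0, 6);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);   /* 10 30 40 50 70 80 90 */
    return 0;
}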

Time Complexity Analysis

Worst-case performance        O(N²)
Best-case performance         O(N log N)
Average-case performance      O(N log N)
Worst-case space complexity   O(log N)
Time complexity analysis for quick sort

Advantages

  1. Quick sort is an in-place (internal) sort; no auxiliary array is needed for sorting
  2. Well suited for smaller collections of data
  3. It is one of the fastest sorting algorithms.

Disadvantages

  1. Its worst case is poor: O(N²) comparisons for unlucky pivot choices.
  2. Time complexity increases significantly for a larger number of comparisons.
  3. Typical implementations are not stable.

Uses

Generally used in commercial applications.

Comparison of Merge and Quick sort

Both merge sort and quick sort are based on the Divide-Conquer-Combine method, about which we had discussed in the previous post.

The main difference, if you observe closely, is that in the Divide step, Merge sort divides the list into two equal sub lists. On the other hand, Quick sort does not guarantee that the division will be equal. It is completely based on the pivot at that point.

In quick sort, sorting is done in place, whereas in merge sort, the merging is done separately/externally, using auxiliary memory space.

Do try to analyse the time complexities for the various sorting algorithms. And don’t forget to apply them in your everyday life!

We will be discussing some more interesting sorting techniques in the next post, so stay tuned!

For any doubts or ambiguities, mail us at everythinggeeky.101@gmail.com, or comment below.

Don’t forget to Like, Comment and Share with your friends.

Stay Tuned for Everything Geeky.

See you next week!

Cheers!

Team Bitxorbytes

Follow us on instagram:  @bitxorbytes

Cover Image credits: https://www.coindesk.com/hyperledger-on-the-verge-of-merging-blockchain-code-from-ibm-digital-asset

Divide and Conquer

“Divide each difficulty into as many parts as is feasible and necessary to resolve it.”

– René Descartes


This quote aptly describes the logic and thinking behind divide and conquer!
So join us in learning this very important concept, which will meet you again in the next post too!

Divide and Conquer is an algorithm design paradigm (a generic model) based on multi-branched recursion: the problem is divided into sub-problems, which are solved recursively until a base case is hit. Typically, a divide and conquer algorithm works in these 3 steps:

1. Divide: The given problem is divided into sub-problems.

2. Conquer: These sub-problems are solved recursively.

3. Combine: The answers are combined as appropriate.

Hence, a divide-and-conquer algorithm works by recursively decomposing a problem into 2 or more sub-problems of a similar type, till they become simple. Once they are simple enough, they are solved directly and their solutions are combined to give the solution to the original problem.

Let’s understand through a visual:-


Let’s understand through an example using merge sort:-

Merge Sort

Explanation

From the figure we can see that the problem (an unsorted array) is divided into sub-problems, which are further divided until the base case is hit (a single element, in this case). Then each sub-problem is conquered (sorted in order) and the results are combined to give the final solution (the sorted array).

Thus, the divide and conquer paradigm is illustrated.
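As a minimal runnable illustration of the three steps, here is a C sketch (names ours) that finds the maximum of an array by divide and conquer:

#include <stdio.h>

/* Divide: split a[lo..hi] in half.  Conquer: find each half's maximum
   recursively.  Combine: the answer is the larger of the two. */
int max_dc(const int a[], int lo, int hi) {
    if (lo == hi)                         /* base case: single element */
        return a[lo];
    int mid = lo + (hi - lo) / 2;
    int left  = max_dc(a, lo, mid);
    int right = max_dc(a, mid + 1, hi);
    return left > right ? left : right;
}

int main(void) {
    int a[] = {3, 41, 52, 26, 38, 57, 9, 49};
    printf("%d\n", max_dc(a, 0, 7));      /* prints 57 */
    return 0;
}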

Uses

The following algorithms are based on the divide-and-conquer design paradigm:

  • Merge Sort
  • Quick Sort
  • Binary Search
  • Closest pair (points)
  • Karatsuba algorithm for fast multiplication
  • Strassen’s Matrix Multiplication
  • Cooley–Tukey Fast Fourier Transform (FFT) algorithm

Advantages

Solving difficult problems

A difficult problem becomes manageable once it can be decomposed into sub-problems: we solve the sub-problems and combine their answers.

Algorithm efficiency

Divide and conquer often leads to asymptotically more efficient algorithms than the straightforward approach (Karatsuba multiplication is a classic example).

Parallelism

Divide-and-conquer algorithms are naturally suited to execution on multi-processor machines, since independent sub-problems can be solved in parallel.

Memory access

Divide-and-conquer algorithms naturally tend to make efficient use of memory cache. The reason is that once a sub-problem is small enough, it and all its sub-problems can, in principle, be solved within the cache, without accessing the slower main memory. 

Disadvantage

Recursion

Since divide and conquer uses recursion, a program may be slow due to call overhead, and even a small error in the recursion can lead to infinite recursion.

Explicit stack

Converting the recursion to an explicit stack uses extra space and complicates the implementation.

Choosing the base cases

Base cases must be chosen wisely, otherwise the recursion may never terminate.

Sharing repeated sub-problems

There can be cases where the problem, when broken down, produces the same sub-problems repeatedly, which increases the solving time and the space consumed (dynamic programming addresses this by storing the answers).


We will be discussing some important algorithm based on this concept further in our blog, so stay tuned!

For any doubts or ambiguities, mail us at everythinggeeky.101@gmail.com 

Don’t forget to Like, Comment and Share with your friends.

Stay Tuned for Everything Geeky.

See you next week!

Cheers!

Team Bitxorbytes

Follow us on insta @bitxorbytes

information credits: https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm

Cover Image credits: http://www.chess.com/article/view/how-to-set-up-a-chess-game

Sorting Squared!

Ever wanted to sort out the books in your library at home? Or even the money in your purse? Then sorting comes your way! 

Sorting is a method of arranging items in a particular format. A list of items can be sorted in a number of ways based upon various parameters; for instance, a numeric list can be sorted in ascending or descending order.

But a discussion on which is the fastest and the most efficient sorting method, can be interesting! 

So, let’s have an interesting discussion on the various methods of sorting. Thus, beginning the ‘Sorting series‘!

This blog is titled as ‘Sorting Squared‘! (And we leave it to you to guess why!)

Bubble sort

Bubble sort is also known by the name “sinking sort”. It is a simple comparison-based sorting algorithm.

The basic approach is to repeatedly compare consecutive elements in the list and swap them if they are in the wrong order.

ALGORITHM

1. Start
2. Take input for n (value of array size)
3. Initialise array with size n
   3.1 Take input for array
4. Start for loop as for (i = 0; i < n-1; i++)
   4.1 Start for loop as for (j = 0; j < n-i-1; j++)
       4.1.1 if (arr[j] > arr[j+1])
           4.1.1.1 swap(&arr[j], &arr[j+1]);
5. End

Explanation:

Bubble sort is amongst the simplest sorting techniques, with time complexity O(n²).

In this sorting technique, we use two for loops:

1. The 1st for loop has ‘i’ starting from 0 and going till ‘n-1’ (array size - 1).
   1.1 The 2nd for loop has ‘j’ starting from 0 and going till ‘n-i-1’ (the last i elements are already in place).

Now we check: if arr[j] is greater than arr[j+1], we swap them; otherwise we just move on, incrementing ‘j’ (and eventually ‘i’). This is done till the whole array is sorted!

Let us understand with an example
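A minimal C sketch, including the early-exit flag mentioned under Advantage below:

#include <stdio.h>
#include <stdbool.h>

void bubble_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < n - i - 1; j++) {   /* last i elements in place */
            if (arr[j] > arr[j + 1]) {
                int t = arr[j]; arr[j] = arr[j + 1]; arr[j + 1] = t;
                swapped = true;
            }
        }
        if (!swapped) break;   /* early exit: already sorted, O(n) best case */
    }
}

int main(void) {
    int a[] = {5, 1, 4, 2, 8};
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);   /* 1 2 4 5 8 */
    return 0;
}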

TIME AND SPACE COMPLEXITY

Worst-case performance        O(N²)
Best-case performance         O(N)
Average-case performance      O(N²)
Worst-case space complexity   O(1) auxiliary

Use:

1. Bubble sort is used in computer graphics, since it can detect a very small error in an almost-sorted array and fix it in linear time.

2. It is also used in the polygon filling algorithm.

Advantage:

Bubble sort has the ability to detect efficiently whether the array is already sorted: with a simple flag, a pass with no swaps ends the algorithm, so on a sorted list the complexity of bubble sort is only O(n). By comparison, many sorting algorithms with better average-case complexity still process the entire array.

Disadvantage:

Bubble sort should be avoided for large collections of elements.


Insertion sort

Insertion Sort is a sorting algorithm used to sort elements by putting them in their position one at a time.

An array of elements is divided into 2 parts: a sorted part containing a single element (initially), and the other part containing the rest of the array.

Each element of the unsorted part is compared with the sorted part and “inserted” into its proper position.

ALGORITHM

1. Repeat the following steps from i = 1 till i < n:
   1.1 Assign key = a[i]
   1.2 Assign j = i-1
   1.3 While j >= 0 and a[j] > key
       1.3.1 Assign a[j+1] = a[j]
       1.3.2 Decrement ‘j’
   1.4 Assign a[j+1] = key
2. End (i) loop.

EXPLANATION

  1. Start a loop for i = 1 (second element of the array) to n-1 (last element of the array).
     1.1 We assign the value of a[i] to ‘key’.
     1.2 We assign the value of i-1 to j.
  2. We compare the key with each a[j], shifting a[j] one place to the right while it is greater than the key.
  3. Thus, after n-1 such passes, we get a sorted array!

Let us understand with an example!
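A minimal C sketch of insertion sort:

#include <stdio.h>

void insertion_sort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i];                 /* next element to insert */
        int j = i - 1;
        while (j >= 0 && a[j] > key) {  /* shift larger elements right */
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;                 /* drop key into its position */
    }
}

int main(void) {
    int a[] = {12, 11, 13, 5, 6};
    insertion_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);   /* 5 6 11 12 13 */
    return 0;
}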

TIME AND SPACE COMPLEXITY

Worst-case performance        O(N²)
Best-case performance         O(N)
Average-case performance      O(N²)
Worst-case space complexity   O(N) total

USE

1. When the number of elements is small.

2. When the array is almost sorted and only a few swaps are required.

ADVANTAGE

The simplicity of the algorithm is its biggest advantage. Also, being an “in-place” sorting technique, it uses minimum extra space. It is advantageous to use this sorting technique when we have a small list.

DISADVANTAGE

The main disadvantage is that it is not suited to a large list of elements: with roughly n² steps to sort n elements, it cannot handle a huge list, and it does not perform well in comparison with faster sorting algorithms.


Selection sort

Selection sort is one of the simplest ‘in-place’ sorting algorithms. It repeatedly selects the appropriate (smallest remaining) element and places it at the end of the sorted part of the list. Hence the name, selection sort!

ALGORITHM

1. Repeat the following steps for i = 0 to (n-2):
   1.1 Set i as min
   1.2 For each of the elements from index (i+1) to (n-1)
       1.2.1 If the current element < a[min], update min to the current element’s index
   1.3 Swap a[i] and a[min]

EXPLANATION

Consider an array a with n elements. In every iteration, the ith element is assumed to be the minimum, and hence i is set as min in the beginning.

Now every element from the (i+1)th element to the last element, is compared with a[min], updating min whenever a smaller element is found in the array.

In other words, the smallest element from the (i+1)th element to the last element is found by the method of comparison.

Once the smallest element and its index (min) is found, it is swapped with the element at the ith index. Then i is incremented and the same process takes place.

At the end of (n-1) such iterations the array becomes sorted!

Let us understand with an example!
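A minimal C sketch of selection sort:

#include <stdio.h>

void selection_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min = i;                     /* assume a[i] is the minimum */
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;                 /* found a smaller element */
        if (min != i) {                  /* at most one swap per pass */
            int t = a[i]; a[i] = a[min]; a[min] = t;
        }
    }
}

int main(void) {
    int a[] = {64, 25, 12, 22, 11};
    selection_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);   /* 11 12 22 25 64 */
    return 0;
}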

TIME AND SPACE COMPLEXITY

Worst-case performance        O(N²)
Best-case performance         O(N²)
Average-case performance      O(N²)
Worst-case space complexity   O(1) auxiliary

USES

One must always remember that selection sort is usually used whenever writes are expensive, such as when sorting on flash memory, where each write costs wear: selection sort performs at most one swap per pass, i.e. O(n) writes in total.

ADVANTAGES

1.Simple implementation

2. Lesser number of writes

3. Takes up lesser memory

4. Best for small number of elements

DISADVANTAGE

Slower for a large number of elements


We will be discussing more sorting techniques further in our blog, so stay tuned!

For any doubts or ambiguities, mail us at everythinggeeky.101@gmail.com, or comment below! 

Don’t forget to Like, Comment and Share with your friends.

Stay Tuned for Everything Geeky.

See you next week!

Cheers!

Team Bitxorbytes

Follow us on instagram:  @bitxorbytes

Cover Image: http://clipart-library.com/sort-cliparts.html

Quiz Time!

Want to know how well you can print patterns? Or why time and space complexity is really that intriguing?

Then here is a set of tricky and interesting pattern and complexity questions to ponder upon!

Q(1-4) Write the looping conditions for these programs !

1.Pattern 1 :

    *
   **
  ***
 ****
*****
 ****
  ***
   **
    *

2.Pattern 2:



**********
****  ****
***    ***
**      **
*        *
**      **
***    ***
****  ****
**********

3. Pattern 3:

5 5 5 5 5 5 5 5 5
5 4 4 4 4 4 4 4 5
5 4 3 3 3 3 3 4 5
5 4 3 2 2 2 3 4 5
5 4 3 2 1 2 3 4 5
5 4 3 2 2 2 3 4 5
5 4 3 3 3 3 3 4 5
5 4 4 4 4 4 4 4 5
5 5 5 5 5 5 5 5 5 

4. Pattern 4:

      1
    1 2 1
  1 2 3 2 1 
1 2 3 4 3 2 1
  1 2 3 2 1
    1 2 1
      1

5. What is the time complexity of the following Program?

int i, j, l = 0; 
 for (i = n / 2; i <= n; i++) { 
     for (j = 2; j <= n; j = j * 2) { 
         l= l + n / 2; 
     } 
 }

6.Find the time complexity of the program !

int ans = 0, i = N; 
 while (i > 0) { 
     ans += i; 
     i /= 2; 
 }

7.Find the time complexity of the program !

void function() 
 {  int i, j; 
 for (i=1; i<=n; i++) 
     for (j=1; j<=log(i); j++) 
         printf("BitXorBytes"); 
 }

8.What is time complexity of following program?(c is a constant)

void powerFunction() {
    for (int i = 2; i <= n; i = pow(i, c)) {
        printf("*");
    }

    // Here fun is sqrt or cube root or any other constant root
    for (int i = n; i > 1; i = fun(i)) {
        printf("*");
    }
}

Solve as many as you can and DM us how many you could solve on instagram @bitxorbytes. You can also reach us at everythinggeeky.101@gmail.com.

For any doubts or ambiguities, mail us at everythinggeeky.101@gmail.com

Don’t forget to Like, Comment and Share with your friends.

Stay Tuned for Everything Geeky, We post every weekend!

Cheers! 🙂

Team Bitxorbytes

Cover Image Credits: https://wallpapersafari.com/question-mark-wallpaper/

Space and Time Complexity

“Survival of the Fittest” is commonly used to denote progress measured by ‘strength’ or ‘success’. If we think about it, the phrase is applicable to programming too. We are often unaware of what goes on behind our program during its execution. There may be more than one way to solve a problem; we need to optimize our code and choose the best algorithm. That is why, while writing a program, we analyze its space and time complexity.
Actual running time depends on many things, like hardware, operating system, processor, compiler, etc. Complexity analysis, however, considers only how the work done by the algorithm grows with the input.

Let us understand some basic terminologies.

Space Complexity of an algorithm/program is the total space taken by it, with respect to the input size. Space complexity includes both space used by input as well as the temporary space/run-time space required by the program.

Time Complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the input. It is expressed by the Order of Growth.
E.g., the time complexity of a loop running till n is generally O(n).

Order of Growth is how the execution time of a program depends on the length of the input. We use different notations to describe the behavior of a function: Big O, Omega and Theta notation.

Big O notation gives an upper bound, commonly used for the ‘worst case’ time complexity of an algorithm. It represents a curve that bounds the algorithmic function from above.
Omega notation gives a lower bound, commonly used for the ‘best case’. It represents a curve that bounds the algorithmic function from below.
Theta notation gives a tight bound: the function is bounded both above and below by the same order of growth.

For large values of ‘n’ (input size), the increasing order of Time Complexities is as follows:

O(1) < O(log(log n)) < O(√(log n)) < O(log n) < O(√n) < O(n) < O(n log n) < O(n√n) < O(n²) < O(n³) < O(n^k) < O(n^(log n)) < O(2^n) < O(n!)

*O(1) denotes ‘constant’ time-complexity.

The inverse relation between Time and Space Complexities is known as Space-Time TradeOff. When an algorithm is optimized to reduce the time complexity, it generally leads to an increase in its space complexity.

Let us understand how to find time complexity of a simple algorithm with an example.

sum = 0;
for (int i = 1; i <= n; i++ )
    for (int j = 1; j <= i*i; j++)
        if (j % i ==0)
            for (int k = 0; k < j; k++)
                sum++;

To calculate time complexity, count how many times (at most) the innermost statement runs.
In this case, the input size is ‘n’.

  1. The outermost loop runs n times.
  2. The next loop runs from j = 1 till j < i*i, i.e. up to n² times per value of i.
  3. The ‘if’ check itself is an atomic statement with cost O(1), but note that it is true only when j is a multiple of i, i.e. for about i values of j.
  4. For each such j, the innermost loop runs j times, so for a fixed i the statement sum++ runs i + 2i + … + i·i ≈ i³/2 times.
  5. Summing over i = 1 … n gives on the order of n⁴ executions of sum++ in total.

Hence, the tight time complexity of this loop nest is O(n⁴). (Simply multiplying the loop bounds, n · n² · n², gives O(n⁵), a valid but loose upper bound.)

Let us take another example.

int count = 0;
for (int i = N; i > 0; i /= 2) 
    for (int j = 0; j < i; j++) 
        count++;

At first glance, it seems like the complexity is O(N log N): N for the j-loop and log N for the i-loop. However, this is incorrect.

Consider the statement count++.
When i=N, j<N; it will run N times.
When i=N/2, j<N/2; it will run N/2 times.
When i=N/4, j<N/4; it will run N/4 times and so on.
The total number of times count++ runs is given by the sum of these (covering all combinations of i and j):
N + N/2 + N/4 + … + 1 ≈ 2N.
So the time complexity will be O(N).
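You can even check this empirically; the following C sketch counts how many times count++ runs for N = 1,000,000 and prints a total just under 2N:

#include <stdio.h>

int main(void) {
    long N = 1000000, count = 0;
    for (long i = N; i > 0; i /= 2)
        for (long j = 0; j < i; j++)
            count++;
    /* count comes out just under 2*N, confirming the O(N) bound */
    printf("N = %ld, count = %ld\n", N, count);
    return 0;
}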

To sum up, here are a few examples for a better understanding of time complexities:

If there is a single loop and the iteration variable increases linearly, then it’s O(n),
e.g. for (i = 0; i < n; i++)   // This is O(n)

If the iteration variable increases geometrically, then it’s O(log n),
e.g. for (k = 1; k < n; k = k*4)   // O(log n)

If there is a nested loop where one loop has a complexity of O(n) and the other O(log n), then the overall complexity is O(n log n);
e.g.
for (i = 0; i < n; i++)          // O(n)
{
    for (j = 1; j < n; j = j*3)  // O(log n)
        ;
}

There are some other methods to find out the time complexity of different algorithms, which we will be discussing further in our blog, so stay tuned!

For any doubts or ambiguities, mail us at everythinggeeky.101@gmail.com

Don’t forget to Like, Comment and Share with your friends.

Stay Tuned for Everything Geeky. See you next week!

Cheers! 🙂

Team Bitxorbytes

Follow us on insta @bitxorbytes

cover image credits: https://www.csetutor.com/time-complexity-and-space-complexity-of-an-algorithm/ , https://courses.cs.washington.edu/courses/cse373/19sp/resources/math/asymptotic-analysis/

Patterns!

Hello World!

Welcome to BitXorBytes!

This is our first post in the series “Explaining Algorithms”. We start by discussing various interesting patterns!

Right-Angled Perimeter Triangle

*
* *
*   *
*     *
* * * * *
Algorithm: 
1: Start
2: Declare variables i, j, rows;
3: Initialize variables
      i=1;
      j=1;
4: Read value of rows
5: Repeat the steps until i=rows
   5.1: Repeat the steps until j=i
          5.1.1 If i==j OR j=1 OR i==rows
                    Display "*"
                   Else
                    Display " "
         5.1.2 j++ 
   5.2:  Start new line
   5.3: i++
6: End

Explanation: Since we want to print only the perimeter of the triangle, one must first figure out where exactly a “*” should be printed. On analysis, you would see that there are three conditions for the perimeter:

1) When variable i is equal to variable j: this gives the diagonal elements (hypotenuse).
2) When variable j = 1: this gives the vertical line of the triangle (perpendicular).
3) When variable i is equal to the total number of rows: this gives the last row of the triangle (base).

In all other combinations of i and j, one would get the interior of the triangle.
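A runnable C sketch of this algorithm (rows fixed at 5 for the demo):

#include <stdio.h>

int main(void) {
    int rows = 5;
    for (int i = 1; i <= rows; i++) {
        for (int j = 1; j <= i; j++) {
            /* hypotenuse, perpendicular, or base: print a star */
            if (i == j || j == 1 || i == rows)
                printf("* ");
            else
                printf("  ");
        }
        printf("\n");
    }
    return 0;
}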

   
Isosceles Perimeter Triangle

      *
    *   *
  *       *
* * * * * * *
Algorithm: 
1: Start
2: Declare variables i, j, rows, space;
3: Initialize variables
      i=1;
      j=1;
      space=1;
4: Read value of rows
5: Repeat the steps until i=rows
   5.1: Repeat the steps until space=(rows-i)
          5.1.1 Display " "
          5.1.2 space++ 
   5.2: Repeat the steps until j=(2*rows -1)
          5.2.1 If i==rows OR j=1 OR j==(2*i-1)
                    Display "*"
                    Else
                    Display " "
          5.2.2 j++ 
   5.3:  Start new line
   5.4: i++
6: End

Explanation: Here, to print only the perimeter of the center-aligned triangle, one has to see where a “*” must be printed. First, the cursor is moved to the right place using an additional variable called space. After setting the cursor, there are three cases where a “*” must be printed:

1) When variable j is equal to (2*i - 1): this gives the right side of the triangle.
2) When variable j = 1: this gives the left side of the triangle.
3) When variable i is equal to the total number of rows: this gives the last row of the triangle, or the base.

In all other combinations of i and j, one would get the interior of the triangle.
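A corresponding C sketch (rows fixed at 4):

#include <stdio.h>

int main(void) {
    int rows = 4;
    for (int i = 1; i <= rows; i++) {
        for (int s = 1; s <= rows - i; s++)   /* leading spaces center the row */
            printf("  ");
        for (int j = 1; j <= 2 * i - 1; j++) {
            /* left edge, right edge, or base: print a star */
            if (j == 1 || j == 2 * i - 1 || i == rows)
                printf("* ");
            else
                printf("  ");
        }
        printf("\n");
    }
    return 0;
}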

Numbers & Stars

1 * * * *
1 2 * * *
1 2 3 * *
1 2 3 4 *
1 2 3 4 5
Algorithm: 
1: Start
2: Declare variables i, j ,k ,rows;
3: Initialize variables
      i=1;
      j=1;
      k=rows;
4: Read value of rows
5: Start a for loop till  i<=rows 
  5.1: Start another for loop till j<=i
       5.1.1 Display j;
5.2 Start a for loop for k till k>= j, decrementing k by 1
         5.2.1 Display “*“;
5.3 Start a new line 
6: End

Explanation: Here we are required to print a number pattern and a star pattern in the given orientation. Hence, we require 3 for loops:

1) Till i <= number of rows (initially entered by the user).
2) Till j <= i: we display the number pattern through this loop.
3) Starting k from rows, till k >= j, decrementing k by 1: we display the “*” pattern.

Hence the number-and-star pattern is printed.

Number-Mountain

1           1
1 2       2 1
1 2 3   3 2 1
1 2 3 4 3 2 1 
Algorithm: 
1: Start
2: Declare variables i, j, k, rows;
3: Initialize variables
      i=1;
      j=1;
      k=1;
4. Read value of rows
5: Start a for loop till  i<=rows
5.1: Start another for loop till j<=i
       5.1.1 Display k “ “ ;
        5.1.2 Increment k by 1;
5.2 Start a for loop till j <= 2*rows - 1 - 2*i
         5.2.1 Display “ “;
5.3 Start another for loop till j<=i
	5.3.1 Decrement k by 1;
        5.3.2 if k == rows ,then continue the loop 
               Else Display k “ “;
5.4 Start a new line 
6: End

Explanation: Here we are required to print the mountain pattern according to the number of rows entered by the user. Hence, there are 4 for loops:

1) Till i <= number of rows (initially entered by the user).
2) Displaying the value of variable k (left-side pattern).
3) Displaying spaces (condition j <= 2*rows - 1 - 2*i).
4) Displaying the value of variable k (right-side pattern). Here there are 2 conditions:
   4.1) If k is equal to the number of rows, continue the loop.
   4.2) Else display variable k.

Now the mountain pattern will be printed.

It was so much fun writing this post and brainstorming the algorithms for different patterns. Hope you enjoy reading them! And do create your own algorithms also. Happy Programming!

For any doubts or ambiguities, mail us at everythinggeeky.101@gmail.com

Don’t forget to Like, Comment and Share with your friends.

Stay Tuned for Everything Geeky!

Cheers! 🙂

Team Bitxorbytes

Image Credits: https://www.pinterest.com/pin/689121180456329700/
