- Java 9 Data Structures and Algorithms
- Debasish Ray Chawdhuri
Arrays
If you are a Java programmer, you must have worked with arrays. Arrays are the basic storage mechanism available for a sequence of data. The best thing about arrays is that the elements of an array are collocated sequentially in memory, so any element can be accessed randomly with a single instruction.
Traversing an array element by element is very simple. Since any element can be accessed randomly, you just keep incrementing an index and accessing the element at that index. The following code shows both traversal and random access in an array:
public static void printAllElements(int[] anIntArray) {
    for (int i = 0; i < anIntArray.length; i++) {
        System.out.println(anIntArray[i]);
    }
}
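To make the random access part concrete, here is a minimal sketch (the array values are illustrative, not from the book) showing that any element can be read or overwritten directly through its index, without scanning the rest of the array:

int[] anIntArray = {4, 9, 1, 7, 2};
int secondElement = anIntArray[1];   // reads 9 directly by computing the offset for index 1
anIntArray[4] = 3;                   // overwrites the last element in place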
Insertion of elements in an array
All the elements in an array are stored in contiguous memory. This makes it possible to access any element in a constant amount of time. A program simply needs to compute the offset that corresponds to an index, and it reads the information directly. But this also means arrays are limited to a fixed size. If you want to insert a new element into an array, you will need to create a new array with one more element and copy the entire data from the original array along with the new value. To avoid this complexity for now, we will start with moving an existing element to a new position. What we want to do is take an element out, shift all the elements between its old position and the target position to make space at the target, and insert the extracted value in that place.

Figure 1: Insertion of an existing array element into a new location
The preceding figure explains what we mean by this operation. The thin black arrows show the movement of the element that is being reinserted, and the thick white arrow shows the shift of the elements of the array. In each case, the bottom figure shows the array after the reinsertion is done. Notice that the shifting is done either to the left or to the right, depending on what the start and target indexes are. Let's put this in code:
public static void insertElementAtIndex(int[] array, int startIndex, int targetIndex) {
    int value = array[startIndex];
    if (startIndex == targetIndex) {
        return;
    } else if (startIndex < targetIndex) {
        for (int i = startIndex + 1; i <= targetIndex; i++) {
            array[i - 1] = array[i];
        }
        array[targetIndex] = value;
    } else {
        for (int i = startIndex - 1; i >= targetIndex; i--) {
            array[i + 1] = array[i];
        }
        array[targetIndex] = value;
    }
}
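Here is a small usage sketch (the array values are my own, not from the book) showing both directions of the shift:

int[] array = {10, 20, 30, 40, 50};
insertElementAtIndex(array, 0, 3);   // elements shift left: array becomes {20, 30, 40, 10, 50}
insertElementAtIndex(array, 4, 1);   // elements shift right: array becomes {20, 50, 30, 40, 10}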
What would be the running time complexity of the preceding algorithm? For all our cases, we will only consider the worst case. When does an algorithm perform at its worst? To understand this, let's see what the most frequent operation in the algorithm is. It is, of course, the shift that happens in the loop. The number of shifts becomes maximum when startIndex is at the beginning of the array and targetIndex is at the end, or vice versa. This is when all but one element has to be shifted, one by one. The running time in this case must be some constant times the number of elements of the array, plus some other constant to account for the non-repeating operations. So it is T(n) = K(n-1) + C for some constants K and C, where n is the number of elements in the array and T(n) is the running time of the algorithm. This can be expressed as follows:
T(n) = K(n-1)+C = Kn + (C-K)
The following steps explain the expression:
- As per rule 1 of the definition of big O, T(n) = O(Kn + (C-K)).
- As per rule 3, T(n) = O(Kn). To see this, note that |-(C-K)| < |Kn + (C-K)| holds for sufficiently large n; therefore, since T(n) = O(Kn + (C-K)), rule 3 gives T(n) = O(Kn + (C-K) + (-(C-K))), that is, T(n) = O(Kn).
- And, finally, as per rule 2, T(n) = O(n).
Now, since the array is the major input to the algorithm, the size of the input is represented by n. So we will say that the running time of the algorithm is O(n), where n is the size of the input.
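If you want to convince yourself of the T(n) = K(n-1) + C estimate empirically, the following hypothetical helper (not part of the book's code) instruments the same shifting loop as insertElementAtIndex for the worst case, moving the first element to the last position. It always performs n - 1 shifts:

public static int countWorstCaseShifts(int n) {
    int[] array = new int[n];          // contents do not matter for counting; assumes n >= 1
    int startIndex = 0;
    int targetIndex = n - 1;
    int value = array[startIndex];
    int shifts = 0;
    for (int i = startIndex + 1; i <= targetIndex; i++) {
        array[i - 1] = array[i];       // the shift, the most frequent operation
        shifts++;
    }
    array[targetIndex] = value;
    return shifts;                     // returns n - 1, so the work grows linearly with n
}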
Insertion of a new element and the process of appending it
Now we move on to the process of insertion of a new element. Since arrays are fixed in size, insertion requires us to create a new array and copy all the earlier elements into it. The following figure explains the idea of an insertion made in a new array:

Figure 2: Insertion of a new element into an array
The following code does exactly that:
public static int[] insertExtraElementAtIndex(int[] array, int index, int value) {
    int[] newArray = new int[array.length + 1];
First, you copy all the elements before the targeted position as they are in the original array:
    for (int i = 0; i < index; i++) {
        newArray[i] = array[i];
    }
Then, the new value must be put in the correct position:
    newArray[index] = value;
In the end, copy the rest of the elements in the array by shifting their position by one:
    for (int i = index + 1; i < newArray.length; i++) {
        newArray[i] = array[i - 1];
    }
    return newArray;
}
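A quick usage example (the values are illustrative, not from the book) shows that a new, longer array is returned while the original is left untouched:

int[] original = {10, 20, 30, 40};
int[] grown = insertExtraElementAtIndex(original, 2, 99);
// grown is {10, 20, 99, 30, 40}; original still holds its four elements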
Once we have this code ready, appending an element just means inserting it at the end, as shown in the following code:
public static int[] appendElement(int[] array, int value) {
    return insertExtraElementAtIndex(array, array.length, value);
}
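For example (again with illustrative values), appending reuses the same routine with the index set to the array's length:

int[] numbers = {1, 2, 3};
int[] appended = appendElement(numbers, 4);   // appended is {1, 2, 3, 4}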
What is the running time complexity of the preceding algorithm? Well, no matter what we do, we must copy all the elements of the original array to the new array, and this is the operation in the loop. So the running time is T(n) = Kn + C for some constants K and C, where n is the size of the array, which is the size of the input. I leave it to you to verify the steps and confirm that T(n) = O(n).