Basic terms in Linux Administration

Q. What are the differences between a regular file and a directory?
A. A directory is marked with a different file type in its i-node entry, and it is a file with a special organization: a table consisting of file names and i-node numbers.


Q. Where are the file names stored on a file system?
A. The actual file names are stored in the directory file.

Q. What is an i-node?
A. An i-node (short for index node) is a data structure that contains the following information describing a file on the file system:
* File type (e.g., regular file, directory, symbolic link, character device).
* Owner (also referred to as the user ID or UID) for the file.
* Group (also referred to as the group ID or GID) for the file.
* Access permissions for three categories of user: owner (sometimes referred to as user), group, and other (the rest of the world).
* Three timestamps: time of last access to the file (shown by ls -lu), time of last modification of the file (the default time shown by ls -l), and time of last status change (last change to i-node information, shown by ls -lc). Notably, as on other UNIX implementations, most Linux file systems don't record the creation time of a file.
* Number of hard links to the file.
* Size of the file in bytes.
* Number of blocks actually allocated to the file, measured in units of 512-byte blocks. There may not be a simple correspondence between this number and the size of the file in bytes, since a file can contain holes, and thus require fewer allocated blocks than would be expected according to its nominal size in bytes.
* Pointers to the data blocks of the file.
I-nodes are identified numerically by their sequential location in the i-node table.
The i-node doesn’t contain a file name; it is only the mapping within a directory list that defines the name of a file.
I-node 1 is used to record bad blocks in the file system. The root directory (/) of a file system is always stored in i-node entry 2.
I-node numbers are unique only within a file system.
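A few commands for inspecting i-nodes on a live system (the file and mount point are examples):
ls -i /etc/hostname     # prints the i-node number of the file
stat /etc/hostname      # shows the i-node metadata: type, owner, permissions, timestamps, link count, size, blocks
df -i /                 # shows how many i-nodes the file system has and how many are in use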

Q. What are hard and soft links?
A. The mapping within a directory list between the name of a file and its i-node number is called a link, or a hard link. One can create multiple names, in the same or in different directories, each of which refers to the same i-node.
Hard links have two limitations, both of which can be circumvented by the use of symbolic links:
* Because directory entries (hard links) refer to files using just an i-node number, and i-node numbers are unique only within a file system, a hard link must reside on the same file system as the file to which it refers.
* A hard link can’t be made to a directory. This prevents the creation of circular links, which would confuse many system programs.
A symbolic link is just a file containing the name of another file. Because a symbolic link refers to a file name rather than an i-node number, it can be used to link to a file on a different file system.
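A minimal sketch of the difference (file names are hypothetical):
touch original.txt
ln original.txt hard.txt        # hard link: shares the i-node of original.txt
ln -s original.txt soft.txt     # symbolic link: a separate file whose content is the target name
ls -li original.txt hard.txt soft.txt   # the first column shows that only hard.txt has the same i-node number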

Q. What is a Signal in Linux, and what signal is invoked when you use the kill command? What is the difference between kill and kill -9?
A. A signal is a limited form of inter-process communication used in Unix, Unix-like, and other POSIX-compliant operating systems. It is an asynchronous notification sent to a process or to a specific thread within the same process in order to notify it of an event that occurred. When a signal is sent, the operating system interrupts the target process’s normal flow of execution.

The difference between invoking kill with no signal specified (which sends SIGTERM, signal 15) and kill -9 (which sends SIGKILL) is that SIGTERM can be caught by the process, giving it a chance to close open files and release resources in use, whereas SIGKILL cannot be caught or ignored and terminates the process immediately.
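A minimal illustration (the PID 12345 is hypothetical):
kill 12345      # sends SIGTERM (15); the process may catch it and clean up before exiting
kill -9 12345   # sends SIGKILL; it cannot be caught or ignored, so the process is terminated immediately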

Q. Describe what happens when you run the rm command.
A. The rm command removes a filename from a directory list, decrements the link count of the corresponding i-node by 1, and, if the link count thereby falls to 0, deallocates the i-node and the data blocks to which it refers.
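The effect on the link count can be observed with stat (file names are hypothetical):
touch demo.txt
ln demo.txt demo-link.txt
stat -c %h demo.txt     # link count is now 2
rm demo-link.txt
stat -c %h demo.txt     # back to 1; the data blocks are freed only when the count drops to 0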

Q. What is a process?
A. A process is an instance of an executing program. When a program is executed, the kernel loads the code of the program into virtual memory, allocates space for program variables, and sets up kernel bookkeeping data structures to record various information (such as process ID, termination status, user IDs, and group IDs) about the process. From a kernel point of view, processes are the entities among which the kernel must share the various resources of the computer.

Q. What are the logically divided parts of a process?

A. A process is logically divided into the following parts, known as segments:

* Text: the read-only machine-language instructions of the program run by the process.
* Data: the initialized and uninitialized global and static variables used by the program.
* Heap: an area from which memory (for variables) can be dynamically allocated at run time. The top end of the heap is called the program break.
* Stack: a piece of memory that grows and shrinks as functions are called and return, and that is used to allocate storage for local variables and function call linkage information.
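To see where these segments sit for a real binary and process (the binary is an example; grep reports its own mappings here):
size /bin/ls                          # prints the sizes of the text, data and bss segments of the binary
grep -E 'heap|stack' /proc/self/maps  # shows the [heap] and [stack] mappings of the process reading the file (grep itself)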

Q. What are the process states in Linux?
A. The main process states are:
* Running: the process is either running or ready to run.
* Interruptible: a blocked (sleeping) state; the process is waiting for an event or a signal from another process.
* Uninterruptible: a blocked state; the process is waiting for a hardware condition and cannot handle any signal.
* Stopped: the process is stopped or halted and can be restarted by another process.
* Zombie: the process has terminated, but its entry is still present in the process table.
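The current state of every process can be seen in the STAT column of ps (R, S, D, T and Z correspond to the states above):
ps -eo pid,stat,comm | head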

Q. How are threads different from processes?
A. Like processes, threads are a mechanism that permits an application to perform multiple tasks concurrently. A single process can contain multiple threads. All threads are independently executing the same program, and they all share the same global memory, including the initialized data, uninitialized data, and heap segments.
Sharing information between threads is easy and fast. It is just a matter of copying data into shared (global or heap) variables. However, in order to avoid the problems that can occur when multiple threads try to update the same information, we must employ some synchronization techniques.
Thread creation is faster than process creation—typically, ten times faster or better. On Linux, threads are implemented using the clone() system call.
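To inspect the threads of a running process (the PID 1234 is hypothetical):
ps -L -p 1234                     # one line per thread (LWP column)
grep Threads /proc/1234/status    # shows the total thread count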

Q. What is a Socket?
A. A socket is a form of interprocess communication and synchronization that can be used to transfer data from one process to another, either on the same host computer or on different hosts connected by a network. A network socket connection is identified by the source IP address and port together with the destination IP address and port.
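To list the sockets currently in use and the processes that own them (ss is part of iproute2; run as root to see all processes):
ss -tunap    # TCP and UDP sockets, all states, numeric addresses, with the owning process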

Q. How do you debug a running process or a library that is being called?
A. To trace the system calls made by a running process: strace -p PID
To trace the dynamic library calls a process makes: ltrace -p PID

Q. How do you see the memory map of a process, along with how much memory it uses?

A. pmap -x PID
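Other commonly used views of the same information (the PID 1234 is hypothetical):
cat /proc/1234/smaps              # detailed per-mapping memory usage
ps -o pid,vsz,rss,comm -p 1234    # virtual size and resident set size of the process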

Q. You run chmod -x /bin/chmod. How do you make chmod executable again without copying it or restoring it from backup?

A. On Linux, when you execute an ELF executable, the kernel does some mapping and then hands the rest of process setup off to ld.so(1), which is treated somewhat like a (hardware-backed) interpreter for ELF files, much like /bin/sh interprets shell scripts, perl interprets perl scripts, etc. And just like you can invoke a shell script without the executable bit via '/bin/sh your_script', you can do:
/lib64/ld-linux-x86-64.so.2 /bin/chmod +x /bin/chmod
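A couple of other approaches that also avoid copying or restoring the binary, assuming perl or busybox happens to be installed (these are alternatives, not part of the ld.so trick above):
perl -e 'chmod 0755, "/bin/chmod"'
busybox chmod +x /bin/chmod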

Q. Explain the TIME_WAIT state in a TCP connection, as displayed by netstat or ss.

A. A TCP connection is specified by the tuple (source IP, source port, destination IP, destination port). The reason there is a TIME_WAIT state following session shutdown is that there may still be live packets out in the network on their way to you. If you were to re-create that same tuple and one of those packets showed up, it would be treated as a valid packet for your new connection (and would probably cause an error due to sequencing). So the TIME_WAIT interval is generally set to double the maximum packet age, which is the maximum age your packets are allowed to reach before the network discards them. That guarantees that, before you are allowed to create a connection with the same tuple, all the packets belonging to previous incarnations of that tuple will be dead, and it dictates the minimum value you should use. The maximum packet age is dictated by network properties; for example, satellite links allow packets to live longer than a LAN does, since the packets have much further to go.
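To see how many sockets are currently sitting in TIME_WAIT:
ss -tan state time-wait | tail -n +2 | wc -l    # skip the header line and count the sockets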

Q. What are huge pages in Linux and what are they used for?

A. Huge pages are a mechanism that allows the Linux kernel to utilize the multiple page size capabilities of modern hardware architectures. Linux uses pages as the basic unit of memory: physical memory is partitioned and accessed using this basic page unit, whose default size is 4096 bytes on the x86 architecture. Huge pages allow large amounts of memory to be utilized with reduced overhead, since fewer pages (and page table entries) are needed to map the same amount of memory.
To check: cat /proc/sys/vm/nr_hugepages.
To set: echo 5 > /proc/sys/vm/nr_hugepages
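Overall huge page usage and the configured huge page size can be seen in /proc/meminfo:
grep Huge /proc/meminfo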

Q. What is the Master Boot Record and how do you back it up and restore it?

A. The MBR is a 512-byte segment on the very first sector of your hard drive, composed of three parts: 1) the boot code, which is 446 bytes long, 2) the partition table, which is 64 bytes long, and 3) the boot signature, which is 2 bytes long.
To back it up: dd if=/dev/sda of=/tmp/mbr.img bs=512 count=1
To restore it: dd if=/tmp/mbr.img of=/dev/sda bs=512 count=1
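If you only want to restore the boot code and leave the current partition table untouched, a commonly used variant is to copy just the first 446 bytes:
dd if=/tmp/mbr.img of=/dev/sda bs=446 count=1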

Q. You are using iSCSI or a virtual machine with an attached block device. Due to high I/O or network latencies, the file system goes into read-only mode from time to time. What can you do to increase the write timeout on the block device?
A. To increase the timeout on a block device at run time, use sysfs:
echo 60 > /sys/block/sdk/device/timeout

Q. Your server is using a lot of cached memory. How do you free it up short of rebooting?

A. Kernels 2.6.16 and newer provide a mechanism to have the kernel drop the page cache and/or inode and dentry caches on command, which can help free up a lot of memory.
To free page cache, dentries and inodes: echo 3 > /proc/sys/vm/drop_caches.
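Dirty pages are not dropped, so it is common to flush them first with sync; the other values drop only part of the caches:
sync
echo 1 > /proc/sys/vm/drop_caches   # free the page cache only
echo 2 > /proc/sys/vm/drop_caches   # free dentries and inodes only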

Q. How do you pin a process to a specific CPU?
A. CPU affinity is a scheduler property that “bonds” a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs. The scheduler attempts to keep processes on the same CPU as long as practical for performance reasons. To pin a new process to the first CPU run:
taskset -c 0 top
To pin an existing process to the second CPU run:
taskset -c 1 -p $(pgrep top)

Q. How do you track new concurrent connections?

A. Concurrent connections are the number of established connections (“handshakes”) between clients and a server at any given time, before all communications have been disconnected, whether by force or by refusal. To watch new connections as the kernel tracks them, you can run:
modprobe ip_conntrack
conntrack -E -e NEW

Q. What is SYN flood and how can you detect it and mitigate it?

A. A SYN flood is a form of denial-of-service attack in which an attacker sends a succession of SYN requests to a target’s system in an attempt to consume enough server resources to make the system unresponsive to legitimate traffic. Detection can be done with netstat or ss by filtering for connections in the SYN-RECV state. Mitigation can be done by null-routing the offending IPs and by enabling SYN cookies in the kernel, which let the server send back the appropriate SYN+ACK response to the client without keeping a SYN queue entry.
ss -a | grep SYN-RECV | awk '{print $4}' | awk -F":" '{print $1}' | sort | uniq -c | sort -n
netstat -antp | grep SYN_RECV | awk '{print $4}' | sort | uniq -c | sort -n
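To enable SYN cookies on the running kernel (add the setting to /etc/sysctl.conf to make it persistent):
sysctl -w net.ipv4.tcp_syncookies=1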

Q. You have a file with 2000 IPs. How do you ping them all using bash in parallel?

A. xargs -n 1 -P 0 ping -w 1 -c 1 < iplistfile

Q. What command can you use to send unsolicited ARP updates to the neighboring servers' caches?

A. arping -U -c 1 -I eth0 0.0.0.0 -s IP_ADDRESS

Q. What Linux utility can craft custom packets, like TCP SYN packets and send them to a remote host?

A. hping3 -S 192.168.1.1 -p 80 -i u1

Q. What is Memory Overcommit in Linux?

A. By default, Linux allows processes to allocate more virtual memory than the system actually has, assuming that they won’t end up actually using it. When there is more overcommitted memory in use than the available physical and swap memory, the OOM killer picks a process to kill in order to recover memory. One reason Linux manages memory this way by default is to optimize memory usage for fork()’ed processes; fork() creates a full copy of the process’s address space, but with overcommitted memory, only the pages that are actually written to need to be allocated by the kernel.
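The policy is controlled by sysctl knobs; for example, to check the current mode and switch to strict accounting (0 = heuristic overcommit, 1 = always overcommit, 2 = do not overcommit beyond the configured ratio):
cat /proc/sys/vm/overcommit_memory
sysctl -w vm.overcommit_memory=2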

Q. What is the system load average, as displayed by uptime?

A. The load average is the sum of the number of processes waiting in the run queue plus the number currently executing, averaged over the last 1, 5 and 15 minutes. If there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has on average fully utilized its processors over the last 60 seconds.
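The same three averages can also be read directly from the proc file system:
cat /proc/loadavg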

Q. How do you list all kernel modules that are compiled in or enabled?

A. You can execute:

cat /boot/config-$(uname -r)
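Some commonly used refinements of that answer (assuming the config file for the running kernel is present in /boot):
grep '=y' /boot/config-$(uname -r)   # features compiled into the kernel
grep '=m' /boot/config-$(uname -r)   # features built as loadable modules
lsmod                                # modules currently loaded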

Q. Kernel space Vs. User space – pros and cons.
A. The role of the operating system, in practice, is to provide programs with a consistent view of the computer’s hardware. In addition, the operating system must account for independent operation of programs and protection against unauthorized access to resources. This nontrivial task is possible only if the CPU enforces protection of system software from the applications.
Every modern processor is able to enforce this behavior. The chosen approach is to implement different operating modalities (or levels) in the CPU itself. The levels have different roles, and some operations are disallowed at the lower levels; program code can switch from one level to another only through a limited number of gates. Unix systems are designed to take advantage of this hardware feature, using two such levels. All current processors have at least two protection levels, and some, like the x86 family, have more levels; when several levels exist, the highest and lowest levels are used. Under Unix, the kernel executes in the highest level (also called supervisor mode), where everything is allowed, whereas applications execute in the lowest level (the so-called user mode), where the processor regulates direct access to hardware and unauthorized access to memory.
We usually refer to the execution modes as kernel space and user space. These terms encompass not only the different privilege levels inherent in the two modes, but also the fact that each mode can have its own memory mapping—its own address space—as well.
Unix transfers execution from user space to kernel space whenever an application issues a system call or is suspended by a hardware interrupt. Kernel code executing a system call is working in the context of a process—it operates on behalf of the calling process and is able to access data in the process’s address space. Code that handles interrupts, on the other hand, is asynchronous with respect to processes and is not related to any particular process.

Q. What is the difference between active and passive FTP sessions?
A. In active FTP the server initiates the data connection back to the client; in passive FTP the client initiates both connections.
Active FTP:
* command channel: client port above 1023 connects to server port 21
* data channel: server port 20 connects to a client port above 1023
Passive FTP:
* command channel: client port above 1023 connects to server port 21
* data channel: client port above 1023 connects to a server port above 1023

MySQL Questions:

Q. What are the two main MySQL storage engines, and how do they differ?

A. The two most popular storage engines in MySQL are InnoDB and MyISAM.
InnoDB supports newer features such as transactions, row-level locking and foreign keys. It is optimized for high-volume read/write operations and high performance.
MyISAM is simpler and better optimized for read-only operations, but it has a limited feature set compared to InnoDB.

Q. What should you consider when setting up master-to-master replication?

A. Duplicate auto-increment key values can be a problem when clients make changes to the database on both masters at the same time. To mitigate this, configure each master with different auto_increment_increment and auto_increment_offset values, as shown below.
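A minimal sketch of the relevant my.cnf settings for a two-master setup (the values are illustrative):
# master 1
auto_increment_increment = 2
auto_increment_offset = 1
# master 2
auto_increment_increment = 2
auto_increment_offset = 2
With these values master 1 generates odd auto-increment IDs and master 2 generates even ones, so inserts on the two masters never collide.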
