The Linux kernel has supported NFS for as long as I can remember. All of the major distributions (Red Hat, CentOS, Fedora, SuSE, Ubuntu) ship with NFS client and server support.

Unix Toolbox

This document is a collection of Unix/Linux/BSD commands and tasks which are useful for IT work or for advanced users. This is a practical guide with concise explanations; however, the reader is supposed to know what she is doing.

Contents: Hardware - Statistics - Users - Limits - Runlevels - Root password - Compile kernel - Repair grub - Misc.

Running kernel and system information

uname -a                           # Get the kernel version (and BSD version)
lsb_release -a                     # Full release info of any LSB distribution
cat /etc/SuSE-release              # Get SuSE version
cat /etc/debian_version            # Get Debian version

Use /etc/DISTR-release with DISTR = lsb (Ubuntu), redhat, gentoo, mandrake, sun (Solaris), and so on. See also /etc/issue.

uptime                             # Show how long the system has been running and the load
hostname -i                        # Display the IP address of the host (Linux only)
man hier                           # Description of the file system hierarchy
last reboot                        # Show system reboot history

Hardware information

Kernel-detected hardware and boot messages:

dmesg                              # Detected hardware and boot messages

Linux:

cat /proc/cpuinfo                  # CPU model
cat /proc/meminfo                  # Hardware memory
grep MemTotal /proc/meminfo        # Display the physical memory
watch -n1 'cat /proc/interrupts'   # Watch changeable interrupts continuously
free -m                            # Used and free memory (-m for MB)
cat /proc/devices                  # Configured devices
lspci -tv                          # Show PCI devices
lsusb -tv                          # Show USB devices
lshal                              # Show a list of all devices with their properties
dmidecode                          # Show DMI/SMBIOS hardware info from the BIOS

FreeBSD:

sysctl hw.model                    # CPU model
sysctl hw                          # Gives a lot of hardware information
sysctl hw.ncpu                     # Number of CPUs installed
sysctl vm                          # Memory usage
sysctl hw.physmem                  # Hardware memory
sysctl -a | grep mem               # Kernel memory settings and info
sysctl dev                         # Configured devices
pciconf -l -cv                     # Show PCI devices
usbdevs -v                         # Show USB devices
atacontrol list                    # Show ATA devices
camcontrol devlist -v              # Show SCSI devices

Load, statistics and messages

The following commands are useful to find out what is going on on the system.

iostat 2                           # IO statistics (2 s intervals)
systat -vmstat 1                   # BSD summary of system statistics (1 s intervals)
systat -tcp 1                      # BSD tcp connections (try also -ip)
systat -netstat 1                  # BSD active network connections
systat -ifstat 1                   # BSD network traffic through active interfaces
systat -iostat 1                   # BSD CPU and disk throughput
ipcs -a                            # Information on System V interprocess communication
tail -n 500 /var/log/messages      # Last 500 kernel/syslog messages
tail /var/log/warn                 # System warnings messages (see syslog.conf)

Users

id                                 # Show the active user id with login and group
last                               # Show last logins on the system
who                                # Show who is logged on the system
groupadd admin                     # Add group "admin" (Linux/Solaris)
useradd -c "Colin Barschel" -g admin -m colin   # Add user colin (Linux/Solaris)
usermod -a -G <group> <user>       # Add existing user to group (Debian)
groupmod -A <user> <group>         # Add existing user to group (SuSE)
userdel colin                      # Delete user colin (Linux/Solaris)
adduser joe                        # FreeBSD add user joe (interactive)
rmuser joe                         # FreeBSD delete user joe (interactive)

Use pw on FreeBSD:

pw groupmod admin -m newmember     # Add a new member to a group
pw useradd colin -c "Colin Barschel" -g admin -m -s /bin/tcsh

Encrypted passwords are stored in /etc/shadow for Linux and Solaris, and in /etc/master.passwd on FreeBSD. If the master.passwd file is modified manually, run pwd_mkdb -p /etc/master.passwd to rebuild the database.

To temporarily prevent logins system wide (for all users but root) use nologin. The message in nologin will be displayed (might not work with ssh pre-shared keys).

echo "Sorry no login now" > /etc/nologin       # Linux
echo "Sorry no login now" > /var/run/nologin   # FreeBSD

Limits

Some applications require higher limits on open files and sockets (like a proxy or a database). The default limits are usually too low.

Linux

Per shell/script: the shell limits are governed by ulimit. The status is checked with ulimit -a. For example, to change the open files limit from 1024 to 10240, do ulimit -n 10240. This is only valid within the shell.
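The per-shell behaviour of ulimit can be sketched as follows; the value 512 is only illustrative:

```shell
# Show all current limits for this shell.
ulimit -a

# Lower the soft open-files limit for this shell only.  Lowering the soft
# limit is always allowed; raising the hard limit requires root.
ulimit -S -n 512
ulimit -n                 # now reports 512

# Child processes inherit the limit; the change dies with the shell.
bash -c 'ulimit -n'       # also reports 512
```

Because the change only affects the current shell and its children, it is harmless to experiment with; a new login shell starts again with the configured defaults.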
The ulimit command can be used in a script to change the limits for the script only.

Per user/process: login users and applications can be configured in /etc/security/limits.conf. For example:

cat /etc/security/limits.conf
*        hard  nproc   250        # Limit user processes
asterisk hard  nofile  409600     # Limit application open files

System wide: kernel limits are set with sysctl. Permanent limits are set in /etc/sysctl.conf.

sysctl -a                          # View all system limits
sysctl fs.file-max                 # View max open files limit
sysctl fs.file-max=102400          # Change max open files limit
fs.file-max=102400                 # Permanent entry in /etc/sysctl.conf
cat /proc/sys/fs/file-nr           # How many file descriptors are in use

FreeBSD

Per shell/script: use the command limits in csh or tcsh, or, as in Linux, use ulimit in an sh or bash shell.

Per user/process: the default limits on login are set in /etc/login.conf. An unlimited value is still limited by the system maximal value.

System wide: kernel limits are also set with sysctl. Permanent limits are set in /etc/sysctl.conf. The syntax is the same as Linux but the keys are different.

sysctl -a                          # View all system limits
sysctl kern.maxfiles=XXXX          # XXXX = maximum number of file descriptors
kern.maxfiles=XXXX                 # Permanent entry in /etc/sysctl.conf; typical values for Squid
kern.ipc.somaxconn=1024            # TCP queue; better for apache/sendmail
sysctl kern.openfiles              # How many file descriptors are in use
sysctl kern.ipc.numopensockets     # How many open sockets are in use

See the FreeBSD handbook chapter on configuration and tuning, and also the FreeBSD performance tuning question on serverfault.com.

Solaris

The following values in /etc/system will increase the maximum file descriptors per process:

set rlim_fd_max=4096               # Hard limit on file descriptors for a single process
set rlim_fd_cur=1024               # Soft limit on file descriptors for a single process

Runlevels

Linux. Once booted, the kernel starts init, which then starts rc, which starts all scripts belonging to a runlevel. The scripts are stored in /etc/init.d and are linked into /etc/rc.d/rcN.d, with N the runlevel number. The default runlevel is configured in /etc/inittab. It is usually 3 or 5. The actual runlevel can be changed with init. For example, to go from 3 to 5:

init 5                             # Enters runlevel 5

0   Shutdown and halt
1   Single-user mode (also S)
2   Multi-user without network
3   Multi-user with network
5   Multi-user with X
6   Reboot

Use chkconfig to configure the programs that will be started at boot in a runlevel.
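As a small illustration of the /etc/inittab format mentioned above, the initdefault entry has the form id:N:initdefault:, and the default runlevel N can be pulled out with awk. The sample file here is a made-up stand-in for a real /etc/inittab:

```shell
# Create a small inittab-style sample (a stand-in for /etc/inittab).
cat > /tmp/inittab.sample <<'EOF'
# Default runlevel (3 is multi-user with network, 5 adds X)
id:5:initdefault:
si::sysinit:/etc/init.d/rcS
EOF

# Fields are colon-separated; the runlevel is field 2 of the initdefault line.
awk -F: '$3 == "initdefault" {print $2}' /tmp/inittab.sample   # prints 5
```

On a real SysV system the same awk line against /etc/inittab tells you which runlevel the machine will enter at boot.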
chkconfig --list                   # List all init scripts
chkconfig --list sshd              # Report the status of sshd
chkconfig sshd --level 35 on       # Configure sshd for levels 3 and 5
chkconfig sshd off                 # Disable sshd for all runlevels

Debian and Debian-based distributions like Ubuntu or Knoppix use the command update-rc.d to manage the runlevel scripts. The default is to start in 2, 3, 4 and 5 and shut down in 0, 1 and 6.

update-rc.d sshd defaults          # Activate sshd with the default runlevels
update-rc.d sshd start 20 2 3 4 5 . stop 20 0 1 6 .   # Same with explicit arguments
update-rc.d -f sshd remove         # Disable sshd for all runlevels
shutdown -h now                    # Shutdown and halt the system

FreeBSD. The BSD boot approach is different from SysV: there are no runlevels. The final boot state (single user, with or without X) is configured in /etc/ttys. All OS scripts are located in /etc/rc.d/. The activation of a service is configured in /etc/rc.conf; the default behavior is configured in /etc/defaults/rc.conf. The scripts respond at least to start|stop|status.

shutdown now                       # Go into single-user mode
exit                               # Go back to multi-user mode
shutdown -p now                    # Shutdown and halt the system
shutdown -r now                    # Reboot

The process init can also be used to reach one of the following states (levels); for example, init 6 for reboot.

0   Halt and turn the power off (signal USR2)
1   Go to single-user mode (signal TERM)
6   Reboot the machine (signal INT)
c   Block further logins (signal TSTP)
q   Rescan the ttys(5) file (signal HUP)

Windows. Start and stop a service with either the service name or the service description (shown in the Services Control Panel) as follows:

net stop WSearch                   # Stop the search service
net start WSearch                  # Start the search service
net stop "Windows Search"          # Same as above using the description
net start "Windows Search"

Reset root password

Linux method 1. At the boot loader (lilo or grub), enter the boot option init=/bin/sh. The kernel will mount the root partition and init will start the bourne shell. Use the command passwd at the prompt to change the password and then reboot. Forget the single-user mode, as you need the password for that. If, after booting, the root partition is mounted read-only, remount it rw:

mount -o remount,rw /
passwd                             # Change the root password
sync; mount -o remount,ro /
reboot

FreeBSD method 1. On FreeBSD, boot in single-user mode, remount / rw and use passwd. You can select the single-user mode on the boot menu (option 4), which is displayed for 10 seconds at startup.
The single-user mode will give you a root shell on the / partition; remount it read-write (mount -u /; mount -a), then run passwd and reboot.

Unixes, and FreeBSD and Linux (method 2). Other Unixes might not let you get away with the simple init trick. The solution is to mount the root partition from another OS (such as a rescue CD) and change the password on the disk.

- Boot a live CD or installation CD into a rescue mode, which will give you a shell.
- Find the root partition with fdisk, e.g. fdisk /dev/sda.
- Mount it and use chroot:

mount -o rw /dev/ad4s3a /mnt       # Use your actual root partition here
chroot /mnt
passwd
reboot

Kernel modules

Linux:

lsmod                              # List all modules loaded in the kernel
modprobe isdn                      # To load a module (here isdn)

FreeBSD:

kldstat                            # List all modules loaded in the kernel
kldload crypto                     # To load a module (here crypto)

FlexPod Datacenter with Microsoft Exchange 2013, F5 BIG-IP and Cisco Application Centric Infrastructure Design Guide

Microsoft Exchange 2013 on FlexPod with Cisco ACI and F5 BIG-IP LTM is a predesigned, best-practice data center architecture that is built on the Cisco Unified Computing System (Cisco UCS), the Cisco Nexus 9000 family of switches, the F5 BIG-IP Application Delivery Controller (ADC), and NetApp fabric-attached storage (FAS) or V-Series systems. The key design details and best practices to be followed for deploying this new shared architecture are covered in this design guide. This Exchange Server 2013 design is built on FlexPod with VMware vSphere 5.5 and Cisco Nexus 9000 Application Centric Infrastructure (ACI). The details of this infrastructure are not covered in this document, but can be found at the following link: FlexPod Datacenter with Microsoft Exchange 2013, F5 BIG-IP, and Cisco Application Centric Infrastructure (ACI) Deployment Guide.

Cisco Validated Designs include systems and solutions that are designed, tested, and documented to facilitate and improve customer deployments. These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of customers.
Achieving the vision of a truly agile, application-based data center requires a sufficiently flexible infrastructure that can rapidly provision and configure the necessary resources independently of their location in the data center. This document describes the Cisco solution for deploying Microsoft Exchange on the NetApp FlexPod solution architecture with F5 BIG-IP Local Traffic Manager (LTM) and VMware vSphere 5.5 Update 2, using Cisco Application Centric Infrastructure (ACI). Cisco ACI is a holistic architecture that introduces hardware and software innovations built upon the new Cisco Nexus 9000 Series product line. Cisco ACI provides a centralized, policy-driven application deployment architecture, which is managed through the Cisco Application Policy Infrastructure Controller (APIC). Cisco ACI delivers software flexibility with the scalability of hardware performance.

The audience of this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineers, and customers who want to take advantage of an infrastructure that is built to deliver IT efficiency and enable IT innovation.

The Cisco Unified Computing System is a third-generation data center platform that unites computing, networking, storage access, and virtualization resources into a cohesive system designed to reduce TCO and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet (10GbE) unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain that is controlled and managed centrally.

Figure 1 Cisco Unified Computing System
Figure 2 Cisco Unified Computing System Components
Figure 3 Cisco Unified Computing System

The main components of the Cisco UCS are:

Compute. The system is based on an entirely new class of computing system that incorporates blade servers based on Intel Xeon E5-2600 Series processors. Cisco UCS B-Series Blade Servers work with virtualized and non-virtualized applications to increase performance, energy efficiency, flexibility, and productivity.

Network. The system is integrated onto a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.

Storage access. The system provides consolidated access to both storage area network (SAN) and network-attached storage (NAS) over the unified fabric. By unifying storage access, Cisco UCS can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI. This provides customers with options for setting storage access and investment protection. Additionally, server administrators can reassign storage-access policies for system connectivity to storage resources, thereby simplifying storage connectivity and management for increased productivity.

Management. The system uniquely integrates all system components, which enables the entire solution to be managed as a single entity by Cisco UCS Manager. Cisco UCS Manager has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a robust application programming interface (API) to manage all system configuration and operations.

The Cisco UCS is designed to deliver:

- A reduced Total Cost of Ownership (TCO), increased Return on Investment (ROI), and increased business agility.
- Increased IT staff productivity through just-in-time provisioning and mobility support.
- A cohesive, integrated system which unifies the technology in the data center. The system is managed, serviced, and tested as a whole.
- Scalability through a design for hundreds of discrete servers and thousands of virtual machines, and the capability to scale I/O bandwidth to match demand.
- Industry standards supported by a partner ecosystem of industry leaders.

The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis. The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors.

Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid-redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS 2204XP or 2208XP Fabric Extenders. A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot and up to 80 Gbps of I/O bandwidth for two slots. The chassis is capable of supporting future 40 Gigabit Ethernet standards.

Figure 4 Cisco Blade Server Chassis (Front, Rear, and Populated Blades View)

The Cisco UCS B200 M4 Blade Server is a half-width, two-socket blade server. The system uses two Intel Xeon E5-2600 v3 Series processors, up to 768 GB of memory, two optional hot-swappable small form factor (SFF) serial-attached SCSI (SAS) disk drives, and VIC adapters that provide up to 80 Gbps of I/O throughput. The server balances simplicity, performance, and density for production-level virtualization and other mainstream data center workloads.

Figure 5 Cisco UCS B200 M4 Blade Server

A Cisco innovation, the Cisco UCS VIC 1240 is a 4-port 10 Gigabit Ethernet, FCoE-capable modular LAN-on-motherboard (mLOM) adapter designed exclusively for the M3 generation of Cisco UCS B-Series Blade Servers.
When used in combination with an optional port expander, the capabilities of the Cisco UCS VIC 1240 can be expanded to eight ports of 10 Gigabit Ethernet.

The fabric interconnects provide a single point of connectivity and management for the entire system. Typically deployed as an active-active pair, the system's fabric interconnects integrate all components into a single, highly available management domain controlled by Cisco UCS Manager. The fabric interconnects manage all I/O efficiently and securely at a single point, resulting in deterministic I/O latency regardless of a server's or virtual machine's topological location in the system.

Cisco UCS 6200 Series Fabric Interconnects support the system's 80-Gbps unified fabric with low-latency, lossless, cut-through switching that supports IP, storage, and management traffic using a single set of cables.