RAID configuration and management under Linux
     
  Add Date : 2017-03-12      
         
         
         
One: Experimental environment
1): A virtual machine
2): A Linux system installed on the virtual machine
3): RAID configured with the tools of the Linux system (mdadm)
4): Six additional hard disks added to the virtual machine (a quick check of the new disks is sketched below)
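
Before starting, it is worth confirming that the virtual machine actually sees the newly added disks. A quick check (assuming the new disks appear as sdb through sdg):
[root@localhost ~]# lsblk -d -o NAME,SIZE,TYPE    # list whole disks only, with their sizes
[root@localhost ~]# fdisk -l | grep "Disk /dev/sd"    # alternative: list the disk devices and sizes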

Two: Experimental goals
1): Become familiar with the commonly used RAID levels
2): Master the configuration commands for RAID0, RAID1, and RAID5
3): Understand the differences between the common RAID levels and their typical uses
4): Be able to recognize the common RAID levels
5): Understand and remember the requirements of each RAID level

Three: Experimental Procedure
1): Configure RAID0
1: Environment:
One additional hard drive, sdb, with two 1G primary partitions: sdb1 and sdb2
2: Steps
Create two 1G primary partitions on sdb
Create the RAID0 array
Export the array configuration file
Format the array and mount it to the specified directory
Add an entry to /etc/fstab so the mount is permanent
 
3: Experimental Procedure
1): Create two 1G primary partitions on sdb
[root@abctest ~]# fdisk /dev/sdb    # enter fdisk to create the two primary partitions
n                                   # new partition
p                                   # primary partition
1                                   # partition number 1, i.e. sdb1
+1G                                 # size 1G
(repeat n / p / 2 / +1G for sdb2, then w to save)
[root@localhost ~]# ll /dev/sdb*    # view the partitions; "*" matches everything starting with sdb
brw-rw----. 1 root disk 8, 16 Jun 28 20:13 /dev/sdb
brw-rw----. 1 root disk 8, 17 Jun 28 20:13 /dev/sdb1
brw-rw----. 1 root disk 8, 18 Jun 28 20:13 /dev/sdb2
[root@localhost ~]# ls /dev/sdb*
/dev/sdb /dev/sdb1 /dev/sdb2

## Two ways of looking at it; either way you can clearly see the three sdb entries under /dev
 
2: Create the RAID0 array
[root@localhost ~]# mdadm -C -v /dev/md0 -l 0 -n 2 /dev/sdb1 /dev/sdb2
# Create an array named md0 at level 0 with two devices: /dev/sdb1 and /dev/sdb2
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.    # the md0 array has been created and is running, so creation succeeded

[root@localhost ~]# mdadm -Ds    # scan for the array just created; it is named /dev/md0
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=0293bd32:6821c095:686fd2b9:0471cbab

[root@localhost ~]# mdadm -D /dev/md0    # view detailed information about the array
  Number Major Minor RaidDevice State
      0    8    17    0    active sync /dev/sdb1
      1    8    18    1    active sync /dev/sdb2

[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf    # save the RAID configuration to /etc/mdadm.conf
[root@localhost ~]# cat !$    # view the generated configuration file
cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=0293bd32:6821c095:686fd2b9:0471cbab

[root@localhost ~]# fdisk /dev/md0    # partition the array
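The interactive fdisk dialog is not reproduced above; a typical session that creates one primary partition spanning the whole array looks roughly like this (the exact cylinder numbers will differ):
Command (m for help): n    # new partition
p                          # primary partition
Partition number (1-4): 1
First cylinder (default 1): <Enter>     # accept the default start
Last cylinder (default ...): <Enter>    # accept the default end, using the whole array
Command (m for help): w    # write the partition table and exit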
 
[root@localhost ~]# ll /dev/md0*    # check the partitions
brw-rw----. 1 root disk 9, 0   Jun 28 20:32 /dev/md0
brw-rw----. 1 root disk 259, 0 Jun 28 20:32 /dev/md0p1    # this second entry is the newly created partition
 
3: Format the array and mount it to the specified directory

Format the partition just created (/dev/md0p1)
[root@localhost ~]# mkfs.ext4 /dev/md0p1
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
Create a directory and mount the partition
[root@localhost ~]# mkdir /raid0    # create a directory named after the RAID0 array
[root@localhost ~]# mount /dev/md0p1 /raid0    # mount /dev/md0p1 under /raid0

Make the mount persistent across reboots
[root@localhost ~]# vim /etc/fstab
/dev/md0p1 /raid0 ext4 defaults 0 0
save

Check the mount
[root@localhost ~]# df -h
Filesystem   Size   Used  Avail  Use%  Mounted on
/dev/sda2    9.7G   3.2G  6.1G   35%   /
tmpfs        1000M  264K  1000M  1%    /dev/shm
/dev/sda1    194M   28M   157M   15%   /boot
/dev/md0p1   2.0G   68M   1.9G   4%    /raid0
The mount succeeded
RAID0 has been created successfully
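As an extra sanity check (not part of the original steps), a small file can be written to the new mount point and read back:
[root@localhost ~]# echo "raid0 test" > /raid0/test.txt
[root@localhost ~]# cat /raid0/test.txt
raid0 test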
2): Configure RAID1
1: Environment:
Create three partitions on sdc: sdc1, sdc2, and sdc3, each 1G
2: Steps
Create the RAID1 array
Add a 1G hot spare
Simulate a disk failure and let the hot spare replace the failed drive automatically
Unmount and stop the array
3: Experimental Procedure
1: Create the partitions and view them
[root@localhost ~]# fdisk /dev/sdc    # create the partitions
[root@localhost ~]# ll /dev/sdc*      # view the partitions just created
brw-rw----. 1 root disk 8, 32 Jun 28 20:46 /dev/sdc
brw-rw----. 1 root disk 8, 33 Jun 28 20:46 /dev/sdc1
brw-rw----. 1 root disk 8, 34 Jun 28 20:46 /dev/sdc2
brw-rw----. 1 root disk 8, 35 Jun 28 20:46 /dev/sdc3
2: Create the RAID1 array
[root@localhost ~]# mdadm -C -v /dev/md1 -l 1 -n 2 -x 1 /dev/sdc1 /dev/sdc2 /dev/sdc3
# Create an array named md1 at level 1 with two active devices (/dev/sdc1 and /dev/sdc2) and one spare (-x 1, /dev/sdc3)

mdadm: size set to 1059222K
Continue creating array? y    # answer y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

[root@localhost ~]# ll /dev/md1    # view the md1 array device
brw-rw----. 1 root disk 9, 1 Jun 28 20:56 /dev/md1

[root@localhost ~]# cat /proc/mdstat    # both md0 and md1 are running
Personalities: [raid0] [raid1]
md1: active raid1 sdc3[2](S) sdc2[1] sdc1[0]
    1059222 blocks super 1.2 [2/2] [UU]
md0: active raid0 sdb2[1] sdb1[0]
    2117632 blocks super 1.2 512k chunks

[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf    # regenerate the configuration file
[root@localhost ~]# cat !$
cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=0293bd32:6821c095:686fd2b9:0471cbab
ARRAY /dev/md1 metadata=1.2 spares=1 name=localhost.localdomain:1 UUID=f7c34545:ecab8452:d826598e:e68c64f3
 
Partition the array, verify, and format
[root@localhost ~]# fdisk /dev/md1    # partition the array
n
p
Partition number (1-4): 1
First cylinder (1-264805, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-264805, default 264805):
Using default value 264805
Command (m for help): w

[root@localhost ~]# ll /dev/md1*    # verify
brw-rw----. 1 root disk 9, 1   Jun 28 21:13 /dev/md1
brw-rw----. 1 root disk 259, 1 Jun 28 21:13 /dev/md1p1
# Partitioning md1 automatically produces the partition md1p1 inside the array.
[root@localhost ~]# mkfs.ext4 /dev/md1p1    # format
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
 
Create a directory and mount the partition
[root@localhost ~]# mkdir /raid1    # create the directory
[root@localhost ~]# mount /dev/md1p1 /raid1    # mount the partition under /raid1
[root@localhost ~]# df -h    # confirm it is mounted
Filesystem   Size   Used  Avail  Use%  Mounted on
/dev/sda2    9.7G   3.2G  6.1G   35%   /
tmpfs        1000M  288K  1000M  1%    /dev/shm
/dev/sda1    194M   28M   157M   15%   /boot
/dev/md0p1   2.0G   68M   1.9G   4%    /raid0
/dev/md1p1   1019M  34M   934M   4%    /raid1
# md1p1 can be seen mounted under /raid1

[root@localhost ~]# cat /proc/mdstat    # verify the arrays are still running
Personalities: [raid0] [raid1]
md1: active raid1 sdc3[2](S) sdc2[1] sdc1[0]
    1059222 blocks super 1.2 [2/2] [UU]

3: Fault simulation
[root@localhost ~]# vim /etc/mdadm.conf
ARRAY /dev/md1 metadata=1.2 spares=1 name=localhost.localdomain:1 UUID=f7c34545:ecab8452:d826598e:e68c64f3
# spares=1 indicates the idle (hot spare) disk

Before the failure (monitored with watch -n 1 cat /proc/mdstat)
Every 1.0s: cat /proc/mdstat    Sun Jun 28 21:41:43 2015
Personalities: [raid0] [raid1]
md1: active raid1 sdc3[2](S) sdc2[1] sdc1[0]
    1059222 blocks super 1.2 [2/2] [UU]
# The md1 array is operating normally; sdc3[2](S) is the hot spare disk kept in reserve as a backup
Now make /dev/sdc1 in /dev/md1 fail
[root@localhost ~]# mdadm -f /dev/md1 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md1

After the failure
Every 1.0s: cat /proc/mdstat    Sun Jun 28 21:44:15 2015
Personalities: [raid0] [raid1]
md1: active raid1 sdc3[2] sdc2[1] sdc1[0](F)
    1059222 blocks super 1.2 [2/2] [UU]
This time sdc1[0] is followed by (F), which means that disk has failed, and sdc3[2], which previously carried the (S) spare flag, no longer does: the hot spare has directly replaced the failed disk and is now an active member.
[root@localhost ~]# mdadm -r /dev/md1 /dev/sdc1
Remove the failed disk
mdadm: hot removed /dev/sdc1 from /dev/md1    # /dev/sdc1 has been removed from /dev/md1
View the result
[root@localhost ~]# watch -n 1 cat /proc/mdstat
Every 1.0s: cat /proc/mdstat    Sun Jun 28 21:50:15 2015
Personalities: [raid0] [raid1]
md1: active raid1 sdc3[2] sdc2[1]
    1059222 blocks super 1.2 [2/2] [UU]
The faulty sdc1 no longer appears
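
At this point md1 is running without a spare. To restore one, a replacement partition of at least the same size could be added back as the new hot spare; a sketch, assuming a suitable partition such as /dev/sdc4 has been created (it does not exist in the environment above):
[root@localhost ~]# mdadm -a /dev/md1 /dev/sdc4    # hypothetical spare partition; adjust the name to your system
mdadm: added /dev/sdc4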
 
NOTE: The configuration file needs to be regenerated after the removal, to prevent problems later.
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf    # regenerate the configuration file
[root@localhost ~]# cat !$
cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=0293bd32:6821c095:686fd2b9:0471cbab
ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=f7c34545:ecab8452:d826598e:e68c64f3
The /dev/md1 entry no longer has spares=1, because the hot spare has been used up.
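
The step list for this experiment also calls for unmounting and stopping the array; a minimal sketch of that final step (note that in the walkthrough that follows, md1 is left running, as the later /proc/mdstat listings show):
[root@localhost ~]# umount /raid1    # unmount the filesystem first
[root@localhost ~]# mdadm -S /dev/md1    # then stop the md1 array
mdadm: stopped /dev/md1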

3): Configure RAID5
1: Environment:
Partitions sde1, sde2, sde3, sde5, and sde6, each 1G
2: Steps
1): Stop the array, then reactivate it
2): Remove a failed drive
3): Stop the array, then reactivate it
Practical exercise:
Add a new 1G hot spare disk and expand the array capacity from three disks to four
      Note: try steps 2) and 3) yourself
3: Experimental Procedure

1: Partition sde: sde1, sde2, and sde3 are primary partitions, sde4 is an extended partition, and sde5 and sde6 are logical partitions
[root@localhost ~]# fdisk /dev/sde
[root@localhost ~]# ll /dev/sde*
brw-rw----. 1 root disk 8, 64 Jun 29 11:27 /dev/sde
brw-rw----. 1 root disk 8, 65 Jun 29 11:27 /dev/sde1
brw-rw----. 1 root disk 8, 66 Jun 29 11:27 /dev/sde2
brw-rw----. 1 root disk 8, 67 Jun 29 11:27 /dev/sde3
brw-rw----. 1 root disk 8, 68 Jun 29 11:27 /dev/sde4
brw-rw----. 1 root disk 8, 69 Jun 29 11:27 /dev/sde5
brw-rw----. 1 root disk 8, 70 Jun 29 11:27 /dev/sde6
 
2: Create the RAID5 array
[root@localhost ~]# mdadm -C -v /dev/md5 -l 5 -n 3 -c 32 -x 1 /dev/sde{1,2,3,5}    # sde1, sde2, and sde3 are the active disks; sde5 is the hot spare
mdadm: size set to 1059202K
Continue creating array? y

View the running arrays
[root@localhost ~]# cat /proc/mdstat
Personalities: [raid1] [raid0] [raid6] [raid5] [raid4]
md5: active raid5 sde3[4] sde5[3](S) sde2[1] sde1[0]
    2118400 blocks super 1.2 level 5, 32k chunk, algorithm 2 [3/3] [UUU]
3: Generate the configuration file
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf
[root@localhost ~]# cat !$
cat /etc/mdadm.conf
ARRAY /dev/md5 metadata=1.2 spares=1 name=localhost.localdomain:5 UUID=8475aa39:504c7c9c:71271abd:49392980
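
To double-check which partition is acting as the hot spare, the array details can be inspected, just as was done for md0 earlier:
[root@localhost ~]# mdadm -D /dev/md5    # detailed view; /dev/sde5 should be listed with the spare state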
 
4: Stop md5 and verify
[root@localhost ~]# mdadm -S /dev/md5
mdadm: stopped /dev/md5
[root@localhost ~]# cat /proc/mdstat
Personalities: [raid1] [raid0] [raid6] [raid5] [raid4]
md0: active raid0 sdb1[0] sdb2[1]
    2117632 blocks super 1.2 512k chunks
md1: active raid1 sdc2[1] sdc3[2]
    1059222 blocks super 1.2 [2/2] [UU]
# md5 has been stopped, so it no longer appears here

5: Reactivate md5
[root@localhost ~]# mdadm -A /dev/md5
mdadm: /dev/md5 has been started with 3 drives and 1 spare.
[root@localhost ~]# cat /proc/mdstat    # view the running arrays
Personalities: [raid6] [raid5] [raid4]
md5: active raid5 sde1[0] sde5[3](S) sde3[4] sde2[1]
    2118400 blocks super 1.2 level 5, 32k chunk, algorithm 2 [3/3] [UUU]
# md5 is active again
 
6: Partition md5
[root@laoyu ~]# fdisk /dev/md5
Command (m for help): p
Command (m for help): n
p
Partition number (1-4): 1
First cylinder (1-529600, default 17):
Last cylinder, +cylinders or +size{K,M,G} (17-529600, default 529600):
Command (m for help): w
Verify
[root@laoyu ~]# ll /dev/md5*
brw-rw---- 1 root disk 9, 5   Jul 2 23:03 /dev/md5
brw-rw---- 1 root disk 259, 0 Jul 2 23:03 /dev/md5p1
7: Format the new partition
[root@laoyu ~]# mkfs.ext4 /dev/md5p1
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
8: Mount the newly created partition on md5
[root@laoyu ~]# mkdir /raid5
[root@laoyu ~]# mount /dev/md5p1 /raid5
[root@laoyu ~]# df -h
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sda2    9.7G  3.9G  5.3G   42%   /
tmpfs        996M  264K  996M   1%    /dev/shm
/dev/sda1    2.0G  57M   1.8G   4%    /boot
/dev/sr0     3.4G  3.4G  0      100%  /media/RHEL_6.2 x86_64 Disc 1
/dev/md5p1   2.0G  68M   1.9G   4%    /raid5
 
Practical exercise:
Add a new 1G hot spare disk and expand the array capacity from three disks to four
The goal is a RAID5 made of four disks, so one more disk must be added

9: Add the new partition sde6
[root@laoyu ~]# umount /raid5    # before adding, the filesystem must first be unmounted

[root@localhost ~]# cat /proc/mdstat    # check the array state
Personalities: [raid6] [raid5] [raid4]
md5: active raid5 sde1[0] sde5[3](S) sde3[4] sde2[1]
    2118400 blocks super 1.2 level 5, 32k chunk, algorithm 2 [3/3] [UUU]
This is the state before the addition; sde6 is not present.
[root@localhost ~]# mdadm -a /dev/md5 /dev/sde6    # add the new partition sde6 to md5
mdadm: added /dev/sde6    # added successfully
[root@localhost ~]# cat /proc/mdstat
Personalities: [raid6] [raid5] [raid4]
md5: active raid5 sde6[5](S) sde1[0] sde5[3](S) sde3[4] sde2[1]
    2118400 blocks super 1.2 level 5, 32k chunk, algorithm 2 [3/3] [UUU]
After the addition, sde6 can be seen in the array (for now as a second spare)
10: Grow the RAID5 array to four disks
[root@localhost ~]# mdadm -G /dev/md5 -n 4    # "-n" is the number of active disks; it was 3 when the array was created, and setting it to 4 makes the array use four disks
mdadm: Need to backup 192K of critical section..
# The array is being extended: n was originally 3 and is now 4
[root@laoyu ~]# mdadm -Ds > /etc/mdadm.conf    # regenerate the configuration file
[root@laoyu ~]# cat /proc/mdstat
Personalities: [raid6] [raid5] [raid4]
md5: active raid5 sde6[5] sde3[4] sde5[3](S) sde2[1] sde1[0]
      2118400 blocks super 1.2 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
      [>....................] reshape = 3.8% (40704/1059200) finish=7.1min speed=2388K/sec
# The progress bar shows the reshape in progress.
[root@localhost ~]# watch -n 1 cat /proc/mdstat    # use watch to follow the reshape dynamically
[root@laoyu ~]# mdadm -Ds > /etc/mdadm.conf
[root@laoyu ~]# cat !$
cat /etc/mdadm.conf
ARRAY /dev/md5 metadata=1.2 spares=1 name=laoyu:5 UUID=5dfd47d2:a7dda97b:1499e9e7:b950b8ca

[root@laoyu ~]# df -h
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sda2    9.7G  3.9G  5.3G   42%   /
tmpfs        996M  264K  996M   1%    /dev/shm
/dev/sda1    2.0G  57M   1.8G   4%    /boot
/dev/sr0     3.4G  3.4G  0      100%  /media/RHEL_6.2 x86_64 Disc 1
/dev/md5p1   2.0G  68M   1.9G   4%    /raid5
Even now, df still shows the capacity of md5p1 as 2.0G.
Problem
A disk was clearly added and the array was grown, so why does the reported capacity remain the original capacity?
A: Growing the array does not resize the md5p1 partition or the ext4 filesystem inside it, so df still shows the original size. One option is simply to create an additional partition in the newly added space.
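
For reference, if the ext4 filesystem had been created directly on /dev/md5 (with no partition table inside the array), the space gained by growing the array could typically be claimed with resize2fs; a sketch under that assumption:
[root@localhost ~]# mdadm -G /dev/md5 -n 4    # grow the array from 3 to 4 active disks, as above
[root@localhost ~]# resize2fs /dev/md5        # then grow the ext4 filesystem to fill the larger array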
4): Configure RAID10
1: Method:
Create two RAID1 arrays first, then build a RAID0 array on top of the RAID1 devices
2: Environment:
Two RAID1 arrays (bottom layer)
One RAID0 array (top layer)
3: Experimental Procedure
1: Create four primary partitions
[root@laoyu ~]# fdisk /dev/sdf
[root@laoyu ~]# ls /dev/sdf*
/dev/sdf /dev/sdf1 /dev/sdf2 /dev/sdf3 /dev/sdf4
2: Create the two underlying RAID1 arrays, md11 and md12
[root@laoyu ~]# mdadm -C -v /dev/md11 -l 1 -n 2 /dev/sdf{1,2}
[root@laoyu ~]# mdadm -C -v /dev/md12 -l 1 -n 2 /dev/sdf{3,4}
3: Create the upper RAID0 array, md10, across the two RAID1 devices
[root@laoyu ~]# mdadm -C -v /dev/md10 -l 0 -n 2 /dev/md{11,12}    # this is the RAID0 layer
mdadm: array /dev/md10 started.

[root@laoyu ~]# cat /proc/mdstat
Personalities: [raid6] [raid5] [raid4] [raid1] [raid0]
md10: active raid0 md12[1] md11[0]    # md10 contains md11 and md12 -- this is the RAID0 layer
    2115584 blocks super 1.2 512k chunks
md12: active raid1 sdf4[1] sdf3[0]    # md12 contains sdf3 and sdf4 -- this is a RAID1 array
    1059254 blocks super 1.2 [2/2] [UU]
md11: active raid1 sdf2[1] sdf1[0]    # md11 contains sdf1 and sdf2 -- this is a RAID1 array
    1059222 blocks super 1.2 [2/2] [UU]

[root@laoyu ~]# mdadm -Ds > /etc/mdadm.conf    # regenerate the configuration file
[root@laoyu ~]# cat !$
cat /etc/mdadm.conf
ARRAY /dev/md5 metadata=1.2 spares=1 name=laoyu:5 UUID=5dfd47d2:a7dda97b:1499e9e7:b950b8ca
ARRAY /dev/md11 metadata=1.2 name=laoyu:11 UUID=046f9eb3:089b059e:f1166314:bece05da
ARRAY /dev/md12 metadata=1.2 name=laoyu:12 UUID=5c948c81:8cefadd6:08486120:ae771a6d
ARRAY /dev/md10 metadata=1.2 name=laoyu:10 UUID=b61c6f65:85ecaffb:2cb12f1f:d76bf506
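
The nested md10 array can then be formatted and mounted in the same way as the earlier arrays; a short sketch (the /raid10 mount point is just an example name):
[root@laoyu ~]# mkfs.ext4 /dev/md10    # format the RAID10 array
[root@laoyu ~]# mkdir /raid10          # create a mount point
[root@laoyu ~]# mount /dev/md10 /raid10
[root@laoyu ~]# df -h | grep md10      # confirm the mount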
 
Appendix
Stop all arrays at once: uppercase -S stops, lowercase -s scans
[root@laoyu ~]# mdadm -Ss
mdadm: stopped /dev/md10
mdadm: stopped /dev/md12
mdadm: stopped /dev/md11
mdadm: stopped /dev/md5
# Before stopping, all of the RAID devices must be unmounted; they cannot still be mounted.

Activate them all again with -As
[root@laoyu ~]# mdadm -As
mdadm: /dev/md5 has been started with 3 drives and 1 spare.
mdadm: /dev/md11 has been started with 2 drives.
mdadm: /dev/md12 has been started with 2 drives.
mdadm: /dev/md10 has been started with 2 drives.
 
5): Delete a RAID array, including its configuration file
Steps
1: Unmount the mounted RAID filesystem
2: Stop the device
3: Delete the configuration file
4: Clear the RAID signature from the physical disks
Example
Create a RAID array:
[root@xuegod ~]# fdisk /dev/sda    # create the partitions sda5 and sda6 yourself
[root@xuegod ~]# mdadm -C /dev/md1 -l 1 -n 2 /dev/sda5 /dev/sda6    # create the array
[root@xuegod ~]# mdadm -Ds > /etc/mdadm.conf    # generate the configuration file

Start the deletion:
[root@xuegod ~]# umount /dev/md1 /mnt    # if the RAID filesystem is mounted, unmount it
[root@xuegod ~]# mdadm -Ss    # stop the RAID devices
[root@xuegod ~]# rm -rf /etc/mdadm.conf    # delete the RAID configuration file
[root@xuegod ~]# mdadm --misc --zero-superblock /dev/sda{5,6}    # clear the RAID signature from the physical partitions
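
To confirm that the RAID signature is really gone, the partitions can be examined again; with the superblock cleared, mdadm should report that no md superblock is found:
[root@xuegod ~]# mdadm --examine /dev/sda5
mdadm: No md superblock detected on /dev/sda5.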
 
Four: End of the experiment

Objective: To set up software RAID disks under the Red Hat Linux V6.1 operating system and so provide RAID functionality.
Methods: Use the raidtools suite of utilities (mkraid, raid0run, raidstop, raidstart, etc.) to implement the RAID functions.
Results: The security problem was addressed without installing a RAID card in the PC running the Linux operating system.
Conclusion: This approach not only effectively reduces the investment cost of the hospital local area network server, but also ensures satisfactory network services.
     
         
         
         