When two or more processes communicate through shared memory (shm), they eventually run into a synchronization problem. "Synchronization" here does not mean protecting reads and writes with a semaphore; it means synchronizing exit: which process should quit first, and how does each one learn that the other has quit? For example, process A is responsible for reading and writing a database, and process B is responsible for processing the data. Process A must exit later than process B, because A still has to save the data that B produced, but A has no way of knowing when B exits. A and B are unrelated processes that do not know each other's pid; the only thing connecting them is the shared memory they both read and write. In the normal case, B could write a flag into the shared memory meaning "process A, you may exit now". But B might exit abnormally, before it even gets the chance to write the flag. Besides, the shared memory is there for data exchange; stuffing a control flag into it feels like an abuse.
Socket communication does not have this problem: if process B exits, the socket gets disconnected (or at worst times out), and the peer notices. But shm has no protocol to detect such things, and rolling one yourself is quite a bit of trouble. So let's start from the shared memory itself.
Shared memory is managed by the kernel. A process deleting a shared memory segment that it opened does not affect another process's use of that same segment. That is because the kernel keeps a reference count on the segment: every process using it bumps the count, and when one process calls the delete function the segment is really removed only once the count drops to 0. So all we need is for the last process to exit to check this count.
In System V shared memory, creating a segment initializes the following structure:
struct shmid_ds {
    struct ipc_perm shm_perm;    /* Ownership and permissions */
    size_t          shm_segsz;   /* Size of segment (bytes) */
    time_t          shm_atime;   /* Last attach time */
    time_t          shm_dtime;   /* Last detach time */
    time_t          shm_ctime;   /* Last change time */
    pid_t           shm_cpid;    /* PID of creator */
    pid_t           shm_lpid;    /* PID of last shmat(2)/shmdt(2) */
    shmatt_t        shm_nattch;  /* No. of current attaches */
    ...
};
Reading this structure with shmctl gives you shm_nattch, the number of processes currently attached to the segment.
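For example, the last process to exit could check the count roughly like this (a minimal sketch; the key 0x1234 and the 4096-byte size are made up for illustration):

/* Minimal sketch: attach, read shm_nattch, detach, and remove the
   segment only when nobody else is attached.  Error handling kept short. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main (void)
{
  int shmid = shmget (0x1234, 4096, IPC_CREAT | 0666);
  if (shmid == -1)
    return 1;

  void *p = shmat (shmid, NULL, 0);       /* attach: shm_nattch goes up */
  if (p == (void *) -1)
    return 1;

  struct shmid_ds ds;
  if (shmctl (shmid, IPC_STAT, &ds) == 0)
    printf ("current attaches: %lu\n", (unsigned long) ds.shm_nattch);

  shmdt (p);                              /* detach: shm_nattch goes down */

  /* After our own detach, a count of 0 means we were the last user,
     so it is safe to really remove the segment. */
  if (shmctl (shmid, IPC_STAT, &ds) == 0 && ds.shm_nattch == 0)
    shmctl (shmid, IPC_RMID, NULL);

  return 0;
}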
However, there is now the newer POSIX standard, and of course we want to use the new standard. Shared memory created with shm_open also has the property that "a process deleting the shared memory it opened does not affect another process using it". But shm_open shared memory no longer carries the structure above, so how does the kernel manage shared memory created by shm_open? Look at the following source code:
/* shm_open - open a shared memory file */
/* Copyright 2002, Red Hat Inc. */

#include <sys/types.h>
#include <sys/mman.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <limits.h>

int
shm_open (const char *name, int oflag, mode_t mode)
{
  int fd;
  char shm_name[PATH_MAX + 20] = "/dev/shm/";

  /* Skip opening slash */
  if (*name == '/')
    ++name;

  /* Create special shared memory file name and leave enough space to
     cause a path/name error if name is too long */
  strlcpy (shm_name + 9, name, PATH_MAX + 10);

  fd = open (shm_name, oflag, mode);

  if (fd != -1)
    {
      /* Once open we must add FD_CLOEXEC flag to file descriptor */
      int flags = fcntl (fd, F_GETFD, 0);

      if (flags >= 0)
        {
          flags |= FD_CLOEXEC;
          flags = fcntl (fd, F_SETFD, flags);
        }

      /* On failure, just close file and give up */
      if (flags == -1)
        {
          close (fd);
          fd = -1;
        }
    }

  return fd;
}
Well, what do you know: this just creates an ordinary file, only located under /dev/shm (that is, in RAM). Now look at the deletion function, shm_unlink:
/* shm_unlink - remove a shared memory file */
/* Copyright 2002, Red Hat Inc. */

#include <sys/types.h>
#include <sys/mman.h>
#include <unistd.h>
#include <string.h>
#include <limits.h>

int
shm_unlink (const char *name)
{
  int rc;
  char shm_name[PATH_MAX + 20] = "/dev/shm/";

  /* Skip opening slash */
  if (*name == '/')
    ++name;

  /* Create special shared memory file name and leave enough space to
     cause a path/name error if name is too long */
  strlcpy (shm_name + 9, name, PATH_MAX + 10);

  rc = unlink (shm_name);

  return rc;
}
This is just an ordinary unlink. In other words, POSIX shared memory is a plain file. The so-called "a process deleting the shared memory it opened does not affect another process using it" is like opening a file with an fstream object and then deleting the file from the directory (that is, unlink): the fstream object can still read and write the file correctly, and there is no reference count anywhere. Well, great: now there is no way to synchronize the processes' exit.
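To see that behaviour concretely, here is a quick sketch (the name "/demo_shm" is made up; on older glibc you may need to link with -lrt): the mapping stays usable even after shm_unlink removes the name from /dev/shm.

/* Sketch: unlink the POSIX shared memory, then keep using the mapping. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main (void)
{
  int fd = shm_open ("/demo_shm", O_CREAT | O_RDWR, 0666);
  if (fd == -1)
    return 1;
  if (ftruncate (fd, 4096) == -1)
    return 1;

  char *p = mmap (NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (p == MAP_FAILED)
    return 1;
  close (fd);                      /* the mapping outlives the descriptor */

  shm_unlink ("/demo_shm");        /* the file vanishes from /dev/shm ... */

  strcpy (p, "still writable");    /* ... but the mapping still works */
  printf ("%s\n", p);

  munmap (p, 4096);
  return 0;
}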
So how do we solve this on Linux, then? Saying it can't be solved would only prove I'm too much of a noob. Since it is a file, let's start from the file: which file operation is atomic and also counts something? Answer: the hard link. For example:
linuxidc@www.linuxidc.com:/dev/shm$ stat abc
  File: "abc"
  Size: 4          Blocks: 8          IO Block: 4096   regular file
Device: 15h/21d    Inode: 5743159     Links: 1
Access: (0664/-rw-rw-r--)  Uid: (1000/linuxidc)   Gid: (1000/linuxidc)
Access: 2015-01-25 21:27:00.961053098 +0800
Modify: 2015-01-25 21:27:00.961053098 +0800
Change: 2015-01-25 21:27:00.961053098 +0800
 Birth: -
linuxidc@www.linuxidc.com:/dev/shm$
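The same link count can also be read programmatically, by calling fstat on the descriptor returned by shm_open; a minimal sketch:

/* Sketch: print the hard-link count of an open shared-memory file. */
#include <stdio.h>
#include <sys/stat.h>

void print_link_count (int fd)   /* fd as returned by shm_open () */
{
  struct stat st;
  if (fstat (fd, &st) == 0)
    printf ("hard links: %lu\n", (unsigned long) st.st_nlink);
}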
But to make counting via hard links work, every process that uses the shared memory would have to call link() to create its own hard link when it attaches. That does solve the problem, but it would litter /dev/shm with N extra files, and this is RAM we are talking about; servers may be beefy these days, but that still doesn't feel right. Well, there is also flock file locking. With the LOCK_SH flag, multiple processes can hold a shared lock on the same file at once. So: process B takes the shared lock when it initializes the shared memory (there can be more than one such process), and the lock is released when it exits, including on an abnormal exit. Process A checks the lock before exiting; once it finds that no lock is held, it can exit safely.
The exit synchronization problem is basically solved. I haven't had time to write code to verify it; next time.
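In the meantime, a rough, untested sketch of the flock idea (the name "/demo_shm" is again made up): each B process holds LOCK_SH while it runs, and A probes with a non-blocking LOCK_EX before exiting.

#include <fcntl.h>
#include <unistd.h>
#include <sys/file.h>
#include <sys/mman.h>

/* Process B: take a shared lock for as long as it uses the memory.
   The kernel drops the lock automatically when B exits, even on a crash. */
int b_init (void)
{
  int fd = shm_open ("/demo_shm", O_RDWR, 0666);
  if (fd == -1)
    return -1;
  if (flock (fd, LOCK_SH) == -1)
    {
      close (fd);
      return -1;
    }
  return fd;   /* keep this descriptor open until B is really done */
}

/* Process A: before exiting, check whether any B still holds the lock.
   Returns 1 when the non-blocking exclusive lock is granted (no shared
   locks remain, so it is safe to quit), 0 when some B is still running. */
int a_can_exit (int fd)
{
  if (flock (fd, LOCK_EX | LOCK_NB) == 0)
    {
      flock (fd, LOCK_UN);
      return 1;
    }
  return 0;    /* EWOULDBLOCK: a B process still holds LOCK_SH */
}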
PS: unlink in the kernel must also know whether any process still has the file open, in order to decide when the file can actually be deleted; I still need to dig up material on that and see whether it can be put to use here. Also, the lsof tool can list every process that has the shared memory open, along with its state; there should be a corresponding API for that too, but I haven't looked into it yet.