NFS mounting read-only: why?
When NFS tries to access a soft-mounted directory, it gives up and returns an error message after trying retrans times (see the retrans option, later). Any process using the mounted directory receives errors if the server goes down. If a hard mount is interruptible, a user may press [CTRL]-C or issue the kill command to interrupt an NFS mount that is hanging indefinitely because a server is down.
If a foreground mount fails, it is retried in the foreground until it succeeds or is interrupted. All automounted directories are mounted in the foreground; you cannot specify the bg option with automounted directories. Background mounts that fail are retried in the background, allowing the mount process to consider the mount complete and move on to the next one. If you have two machines configured to mount directories from each other, configure the mounts on one of the machines as background mounts. That way, if both systems try to boot at once, they will not become deadlocked, each waiting to mount directories from the other.
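As a sketch of the cross-mounting setup described above, one machine can use bg so a boot-time failure is retried without blocking the rest of the boot. Hostnames and paths here are illustrative, not from the original text:

```
# /etc/fstab on hostA: background mount from hostB; if hostB is down at
# boot, the mount is retried in the background and boot continues
hostB:/export/data  /mnt/data  nfs  bg,hard,intr  0  0

# /etc/fstab on hostB: ordinary foreground mount from hostA
hostA:/export/home  /mnt/home  nfs  fg,hard,intr  0  0
```

With this arrangement, if both machines boot at once, hostA does not block waiting for hostB, so the two cannot deadlock on each other.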
Allowing access to device files (the devs option) is useful for maintaining a standard, centralized set of device files, if all your systems are configured similarly. The nodevs option makes it an error for a process on the NFS client to read from or write to an NFS-mounted device file.
If an NFS request times out, the timeout value is doubled and the request is retransmitted. After the request has been retransmitted the number of times specified by the retrans option (see below), a soft mount returns an error, while a hard mount retries the request.
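To make the retry behaviour concrete, here is a hedged /etc/fstab example of a soft mount with explicit timeout parameters. The server name, path, and values are illustrative (and the kernel may enforce larger minimums for NFS over TCP):

```
# Soft mount: give up after retrans retransmissions instead of retrying
# forever. timeo is in tenths of a second, so timeo=10 means the first
# timeout is 1 second; the timeout then doubles on each retransmission
# (1s, 2s, 4s) before the soft mount returns an error.
server:/export  /mnt/nfs  nfs  soft,timeo=10,retrans=3  0  0
```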
These can certainly be stacked on top of each other (a read-only server filesystem, NFS-exported as read-only and mounted as read-only on clients), but they don't have to be. For instance, you can NFS-export filesystems as read-only but mount them read-write on clients (we do this here for complex reasons). Now let's talk about atime and atime updates. In NFS, atime updates are the responsibility of the server, not the clients.
More specifically, they are generally the responsibility of the underlying server filesystem code (or the VFS), not the NFS server code itself, and as such they can happen when you read data through a read-only NFS mount or even a read-only NFS export.
This implies that not all client reads necessarily update the server's atime, because a client may satisfy a read from its own file cache instead of going to the server. If you think about it, this is actually a feature. If you have atime enabled on a read-write filesystem mount, you have told the server kernel that you want to know when people read data from the filesystem, and lo, this is exactly what you are getting.
Since you can export the same filesystem read-write to some clients and read-only to others, suppressing atime updates on read-only NFS exports could also produce odd effects.
Read a file from client A and the atime updates; read the file from client B and it doesn't, and all because you didn't trust client B enough to let it actually make filesystem-level changes to your valuable filesystem. You might think that the NFS export process should notice when it's exporting a read-only filesystem as theoretically read-write and silently change it to read-only for you. One of the problems with this is that on many systems it's possible to switch filesystems back and forth between read-only and read-write status through various mechanisms, not just mount.
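One such mechanism is an ordinary remount on the server, which flips the status of an already-mounted filesystem without touching its NFS exports. A minimal sketch, with a hypothetical path:

```
# On the server: toggle a mounted filesystem between read-only and
# read-write in place; existing NFS exports of it are unaffected.
mount -o remount,ro /srv/data    # now read-only
mount -o remount,rw /srv/data    # back to read-write
```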
In practice you might as well let the NFS server accept the write operations and have the VFS then reject them; the outcome is the same while the system is simpler and behaves better in the face of various things happening.
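The export-versus-mount distinction discussed above can be sketched as a pair of config fragments. Hostnames and paths are hypothetical:

```
# /etc/exports on the server: the same filesystem exported read-write to
# one client and read-only to another
/srv/data  clientA(rw,sync)  clientB(ro,sync)

# /etc/fstab on clientB: nothing stops clientB from *mounting* it rw;
# its write operations are simply rejected on the server side
server:/srv/data  /mnt/data  nfs  rw  0  0
```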
I am working on an Ubuntu Server 64-bit machine. I have mounted an NFS share as rw, but whenever I try to edit anything on the mount point in question, I get a read-only filesystem error. I can connect to the same NFS share from other servers and read and write just fine; the only problem is on this server. I have tried mount -o remount,rw vnxnfs1. I have been testing as the root user on the machine with the problem, and writing to the share works as both root and a normal user from another server where NFS writes succeed.
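One quick first check in a situation like this is whether the kernel actually holds the mount read-only, whatever the fstab entry or mount command asked for. A small sketch that reads a table in /proc/mounts format from stdin (the mount point /mnt/nfs is hypothetical):

```shell
# mount_mode: print the first mount option (ro or rw) that the kernel
# records for a given mount point, reading /proc/mounts-format lines
# from stdin. Usage: mount_mode /mnt/nfs < /proc/mounts
mount_mode() {
  awk -v mp="$1" '$2 == mp { split($4, o, ","); print o[1] }'
}
```

If this prints ro even though you mounted with rw, the restriction is coming from the server side (the export options or the underlying filesystem), not from your mount command.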
You may have constraints on an exported root directory that are causing the problem. As Brian said, a parent export can override a child export, but you can solve this by adding priorities to your exports. Using Brian's example, this would solve the problem.

In my case the problem was an extra space.
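The answer's actual exports lines are not reproduced here, but the parent/child pattern it describes might look like this hypothetical /etc/exports sketch (paths and client name are illustrative, not Brian's example):

```
# /etc/exports: a read-only parent export can override the intended
# access to a subtree. Listing the more specific child export
# explicitly gives its options priority for that subtree.
/export        client(ro,sync,no_subtree_check)
/export/data   client(rw,sync,no_subtree_check)
```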
It says rw, but the space before the bracket apparently breaks it silently.
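A hypothetical /etc/exports fragment showing the kind of silent breakage an extra space causes (the path and address are illustrative):

```
# Intended: export /srv/data to 192.168.1.100 read-write.
/srv/data  192.168.1.100(rw,sync)

# Broken: with a space before the parenthesis, the options no longer
# bind to the host. The host gets the export defaults (read-only),
# and (rw,sync) applies to all hosts.
/srv/data  192.168.1.100 (rw,sync)
```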