Commit 28f1650: Modify pmem-CSI naming as specified internally. (#247)
Signed-off-by: Morales Quispe, Marcela <marcela.morales.quispe@intel.com>
moralesq1 authored and okartau committed Apr 24, 2019
1 parent b63632e commit 28f1650
Showing 4 changed files with 11 additions and 11 deletions.
DEVELOPMENT.md (5 additions, 5 deletions)
@@ -79,7 +79,7 @@ Network ports

Network ports are opened as configured in manifest files:

-- registry endpoint: typical port value 10000, used for PMEM-CSI internal communication
+- registry endpoint: typical port value 10000, used for pmem-CSI internal communication
- controller endpoint: typical port value 10001, used for serving CSI API
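
For a quick check that a node actually opened these ports, something like this works (a sketch; the port values are the typical defaults listed above):

```sh
# Show listening TCP sockets and filter for the typical pmem-CSI ports.
ss -tlnp | grep -E ':10000|:10001'
```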


@@ -183,7 +183,7 @@ This produces the following binaries in the `_output` directory:

* `pmem-ns-init`: Utility for namespace initialization in DeviceMode:LVM
* `pmem-vgm`: Utility for creating logical volume groups over the PMEM devices created earlier, used in DeviceMode:LVM
-* `pmem-csi-driver`: PMEM-CSI driver, used in both DeviceModes
+* `pmem-csi-driver`: pmem-CSI driver, used in both DeviceModes
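
A quick way to confirm a stand-alone build succeeded (a sketch; it assumes the make-based build described earlier in this document):

```sh
# List the freshly built binaries.
ls -l _output/pmem-ns-init _output/pmem-vgm _output/pmem-csi-driver
```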


Stand-alone mode build potential issues
@@ -264,7 +264,7 @@ Notes about switching DeviceMode
================================

If DeviceMode is switched between LVM and Direct(ndctl), please keep
-in mind that PMEM-CSI driver does not clean up or reclaim Namespaces,
+in mind that pmem-CSI driver does not clean up or reclaim Namespaces,
therefore Namespaces plus other related context (possibly LVM state)
created in the previous mode will remain stored on the device and most
likely will cause trouble in the other DeviceMode.
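
Before switching, it can help to inspect what state remains on the node; a sketch using standard tools (assumes `ndctl` and the LVM utilities are installed):

```sh
ndctl list -N   # Namespaces left over from the previous mode
vgs             # volume groups that DeviceMode:LVM may have created
```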
@@ -286,7 +286,7 @@ Going from DeviceMode:Direct to DeviceMode:LVM

No special steps are needed to clean up Namespaces state.

-If PMEM-CSI driver has been operating correctly, there should not be
+If pmem-CSI driver has been operating correctly, there should not be
existing Namespaces, as the CSI Volume lifecycle should have deleted
those at the end of a Volume's life. If there are, you can either keep
those (DeviceMode:LVM does honor "foreign" Namespaces and leaves those
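
Deleting leftover Namespaces with `ndctl` could look like this (a sketch; the namespace name is illustrative):

```sh
ndctl list -N                             # find the leftover Namespace names
ndctl destroy-namespace namespace0.0 -f   # force-destroy one; the name is an example
```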
@@ -296,7 +296,7 @@ using `ndctl` on node.
Notes about accessing system directories in a container
=======================================================

-The PMEM-CSI driver will run as container, but it needs access to
+The pmem-CSI driver will run as container, but it needs access to
system directories /sys and /dev. Two related potential problems have
been diagnosed so far.
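
To illustrate the kind of host access involved, running the container by hand would need bind mounts roughly like these (a sketch only; the image name and the privilege flag are assumptions, not the project's actual deployment):

```sh
docker run --rm --privileged \
  -v /sys:/sys \
  -v /dev:/dev \
  pmem-csi-driver-image   # placeholder image name
```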

README-qemu-notes.md (3 additions, 3 deletions)
@@ -1,10 +1,10 @@
-# Notes about VM config for Pmem-CSI development environment
+# Notes about VM config for pmem-CSI development environment

-These are some notes about manually created VM configuration for pmem-csi development using emulated NVDIMM as host-backed files.
+These are some notes about manually created VM configuration for pmem-CSI development using emulated NVDIMM as host-backed files.
There exists a newer, more convenient automated method using code in the test/ directory, where a four-node Kubernetes cluster
can be created by simply typing `make start`.

-VM configuration described here was used in early pmem-csi development where a VM was manually created and then used as development host. The initial VM config was created by libvirt/GUI (also doable using virt-install CLI), with some configuration changes made directly in VM-config xml file to emulate a NVDIMM device backed by host file. Two emulated NVDIMMs were tried at some point, but operations on namespaces appear to be more reliable with single emulated NVDIMM.
+VM configuration described here was used in early pmem-CSI development where a VM was manually created and then used as development host. The initial VM config was created by libvirt/GUI (also doable using virt-install CLI), with some configuration changes made directly in VM-config xml file to emulate a NVDIMM device backed by host file. Two emulated NVDIMMs were tried at some point, but operations on namespaces appear to be more reliable with single emulated NVDIMM.
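
For reference, an equivalent QEMU command line with one file-backed emulated NVDIMM looks roughly like this (a sketch; sizes and paths are illustrative, not the values used during pmem-CSI development):

```sh
qemu-system-x86_64 \
  -machine pc,accel=kvm,nvdimm=on \
  -m 4G,slots=2,maxmem=36G \
  -object memory-backend-file,id=mem1,share=on,mem-path=/var/lib/nvdimm-backing,size=32G \
  -device nvdimm,id=nvdimm1,memdev=mem1
```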


## maxMemory
deploy/kustomize/driver/README.md (1 addition, 1 deletion)
@@ -1,3 +1,3 @@
-The common parts for a PMEM-CSI driver deployment. Image versions and
+The common parts for a pmem-CSI driver deployment. Image versions and
additional parameters for LVM vs. direct mode will be added in
overlays.
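
Overlays are typically consumed through kustomize, for example (the overlay path is hypothetical):

```sh
kubectl apply -k deploy/kustomize/some-overlay   # hypothetical overlay path
```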
deploy/kustomize/testing/README.md (2 additions, 2 deletions)
@@ -1,7 +1,7 @@
-This mixin for a regular production deployment of PMEM-CSI adds port
+This mixin for a regular production deployment of pmem-CSI adds port
forwarding to the outside world:

-The pmem-csi-controller-testing Service exposes the PMEM-CSI controller's
+The pmem-csi-controller-testing Service exposes the pmem-CSI controller's
csi.sock as a TCP service with a dynamically allocated port, on any
node of the cluster. For this to work, the pmem-csi-controller has
to be patched with the controller-socat-patch.yaml. Due to
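
The forwarding that the socat patch mentioned above sets up amounts to something like this (a sketch; the port and the socket path are assumptions):

```sh
# Relay a TCP port to the controller's Unix-domain CSI socket.
socat TCP-LISTEN:10002,reuseaddr,fork UNIX-CONNECT:/csi/csi.sock
```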
