Make a PDF document smaller (Mac and Linux)

gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen -dNOPAUSE -dQUIET -dBATCH -sOutputFile=name-of-output-file.pdf inputfilename.pdf

Other options for -dPDFSETTINGS (instead of /screen):
/printer
/ebook
/default
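
For example, the same command with /ebook usually keeps better image quality at a somewhat larger file size; the file names here are just placeholders:

gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile=name-of-output-file.pdf inputfilename.pdf

# Compare sizes before and after
ls -lh inputfilename.pdf name-of-output-file.pdf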

CEPH operation commands

Removing a file system and its pools:

In the commands below, cephfs is the name of the file system, and
cephfs.cephfs.meta and cephfs.cephfs.data are its two pools.

# Mark the fs as failed
ceph fs fail cephfs

# Remove the fs
ceph fs rm cephfs --yes-i-really-mean-it

# Enable removal of pools
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'

# Remove the OSD pools
ceph osd pool rm cephfs.cephfs.meta cephfs.cephfs.meta --yes-i-really-really-mean-it --yes-i-really-really-mean-it-not-faking
ceph osd pool rm cephfs.cephfs.data cephfs.cephfs.data --yes-i-really-really-mean-it --yes-i-really-really-mean-it-not-faking
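
A quick check that the file system and its pools are really gone:

ceph fs ls
ceph osd lspools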

Remove OSD

ceph osd tree
  ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
  -5       18.19344     host ceph01
   0   hdd  3.63869         osd.0       up  1.00000 1.00000
   1   hdd  3.63869         osd.1       up  1.00000 1.00000
   2   hdd  3.63869         osd.2       up  1.00000 1.00000

# Do the following for each OSD
ceph osd out osd.0
ceph osd down osd.0
systemctl stop ceph-osd@0.service
# If you get "Error EBUSY: osd.0 is still up", ssh to the host that
# owns the OSD and run: systemctl stop ceph-osd@0
ceph osd crush rm osd.0
ceph osd rm osd.0
ceph auth del osd.0
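
A sketch for clearing out several OSDs in one go, assuming the IDs to remove are 0, 1 and 2 and that ceph01 is the host that owns them (take both from your own ceph osd tree output):

for id in 0 1 2; do
  ceph osd out osd.$id
  ceph osd down osd.$id
  ssh ceph01 systemctl stop ceph-osd@$id   # stop the daemon on its host
  ceph osd crush rm osd.$id
  ceph osd rm osd.$id
  ceph auth del osd.$id
done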

Remove a server from the CRUSH map

Once all of its OSDs have been removed, it’s safe to remove the server:

ceph osd crush rm ceph05
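
To confirm the host is gone from the CRUSH map:

ceph osd crush tree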

Remove the LVM volume(s) on each server, e.g. ceph01

ssh ceph01
lvscan
lvremove /dev/ceph-...
exit
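
To double-check that the Ceph logical volumes are gone before zapping (ceph01 is just the example host):

ssh ceph01
sudo lvs
exit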

Zap disks and add disks using ceph-deploy

cd physcluster # this is the dir where I built the cluster config from the initial setup
ceph-deploy disk zap ceph02 /dev/sdx # replace sdx with the appropriate letter

# Now add the OSDs
sudo ceph-deploy osd create ceph01 --data /dev/sdx # replace sdx with the appropriate letter
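
To verify the new OSDs have come up and the cluster is healthy:

ceph osd tree
ceph -s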

Enable NFS-Ganesha on the Ceph dashboard

ceph dashboard set-ganesha-clusters-rados-pool-namespace cephfs # the value is the RADOS pool (optionally pool/namespace) where Ganesha keeps its configuration
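
The matching get- command should show the value that was just set (assuming your Ceph release exposes it):

ceph dashboard get-ganesha-clusters-rados-pool-namespace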

Creating a new ceph user

Don’t use the client.admin keyring with Ceph clients. Create a new client.cephfs user on the Ceph cluster and allow it access to the CephFS pools:

ceph auth get-or-create client.cephfs mon 'allow r' osd 'allow rwx pool=cephfs_metadata,allow rwx pool=cephfs_data' -o /etc/ceph/client.cephfs.keyring
ceph-authtool -p -n client.cephfs /etc/ceph/client.cephfs.keyring > /etc/ceph/client.cephfs
more /etc/ceph/client.cephfs
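
A sketch of mounting CephFS with this user via the kernel client; mon1 and /mnt/cephfs are placeholders for a monitor host and a mount point:

sudo mkdir -p /mnt/cephfs
sudo mount -t ceph mon1:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/client.cephfs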