Welcome to Veeam’s documentation!

Veeam’s Overview

_images/veeambanner.png
  • Veeam Software is a privately held U.S. information technology company with a U.S.-based leadership team.

  • Founded in 2006, we focused on simplifying backups for virtual machines. We quickly became the industry leader. Veeam continues to charge forward to innovate the industry so you can own, control and protect your data anywhere in the hybrid cloud.

  • In March 2020, Veeam was acquired by Insight Partners, which has enabled us to expand into new markets and continue our growth trajectory.

  • Veeam named a Leader for the 6th time!

_images/gartner.png

Veeam’s Vision

To be the most trusted provider of backup, recovery and data management solutions that deliver Modern Data Protection.

Veeam Product

Veeam Backup & Replication (aka VBR)

Initially a product for VM protection, Veeam Backup & Replication is now a central management and protection platform with built-in agents and platform services.

_images/vbr01.png

Veeam One

Delivers deep, intelligent monitoring, reporting and automation through interactive tools and intelligent learning.

_images/veeamone01.png

Veeam Backup for Public Cloud (includes AWS, GCP, Azure)

Cloud-native, web-based console for AWS/Azure, available via Marketplace.

_images/vbpublic01.png

Veeam Disaster Recovery Orchestrator (aka VDRO)

A disaster recovery solution should be easy to configure and easy to use.

_images/vdro01.png

Veeam Backup for O365 (aka VBO)

Retrieve Office 365 Exchange Online, SharePoint Online, OneDrive for Business and Teams data from a cloud-based instance of Office 365.

_images/vbo01.png

Veeam Service Provider Console(aka VSPC)

Cloud-enabled platform for Veeam Cloud & Service Providers (VCSP) partners and distributed enterprise environments to deliver expert-built and managed Backup as a Service (BaaS) and Disaster Recovery as a Service (DRaaS).

_images/vspc01.png

Kasten K10 (aka K10)

Protect your fleet of Kubernetes deployments with multi-cluster support, and retain control of and access to your data.

_images/k1001.png

Backup Concept

Industry Rule 3-2-1, Plus Veeam's 1-0

Keep 3 copies of your data, on 2 different media, with 1 copy off-site. Veeam extends this with 1 copy that is offline, air-gapped or immutable, and 0 errors after backup verification (3-2-1-1-0).

_images/3-2-1.png

RPO and RTO

The Recovery Point Objective (RPO) is the maximum acceptable amount of data loss, measured in time; the Recovery Time Objective (RTO) is the maximum acceptable time to restore service.

_images/recovery.png

Lab Design

_images/LabDemo.png

Lab Demo


Lab 1. Create a VeeamZIP Job

With Veeam Backup & Replication, you can quickly perform backup of one or several VMs with VeeamZIP. VeeamZIP is similar to a full VM backup.

The VeeamZIP job always produces a full backup file (VBK) that acts as an independent restore point. You can store the backup file in a backup repository, in a local folder on the backup server, or on a network share.

If Veeam Backup & Replication isn’t already running, double-click the Veeam Backup & Replication Console icon located on the desktop.

Step-By-Step

Quickly create a point-in-time copy of one of your virtual machines using VeeamZIP.

  1. Open the Inventory view.

  2. In the infrastructure tree, choose VMware vSphere, vCenter Servers, vc.veeamdemo.local, Veeam Datacenter, VeeamCluster.

_images/lab01_01.png
  3. In the working area, right-click tinyLinux and select VeeamZIP…

_images/lab01_02.png
  4. In the window that opens, in the Destination section, review the location (e.g. VeeamRepo02-ReFS) where you want to store the VeeamZIP file.

    Use the Delete this backup automatically list to specify retention settings for the created VeeamZIP file.

    Select ‘in 1 week’ from the drop-down list.

    By default, VeeamZIP files are not removed but are kept in the specified location for an indefinite period of time.

_images/lab01_03.png
  5. To review additional options for the VeeamZIP file, click More.

    As we did not specify a password, Veeam Backup & Replication will produce an unencrypted VeeamZIP file. By default, Veeam Backup & Replication uses application-aware image processing to create a transactionally consistent backup of VMs running applications with VSS support. If you were backing up VMs that run something other than a Windows OS, or applications without VSS support, you could disable this option by selecting the Disable guest quiescence checkbox.

_images/lab01_04.png
  6. Click OK. The VeeamZIP job will start immediately.

    You can click Show Details to view the status of the VeeamZIP job. You may also click OK and continue with the labs. To monitor job progress, navigate to the Backup & Replication section, choose Last 24 hours, and then click Running.

_images/lab01_05.png

VBR - Creating and Scheduling Backup Jobs

To back up VMs, you must configure a backup job. The backup job defines how, where and when to back up VM data.

One job can be used to process one or more VMs. Jobs can be started manually or scheduled to run automatically at a specific time.

Step-By-Step

Create a backup job to protect some of the virtual machines used in the lab environment.

  1. In the HOME workspace, on the menu bar, click Backup Job, Virtual Machine, VMware vSphere.

_images/lab02_01.png

2. At the first step of the wizard, enter Backup (your initials) as the Name. Keep the default Description and click Next.

_images/lab02_02.png

3. Click Add… to browse the virtual infrastructure, then select Veeam-DC01 and Tiny-Veeam. Click Add and Next.

_images/lab02_03.png _images/lab02_04.png
  4. Leave Automatic selection for Backup proxy.

  5. Confirm Main Backup Repository is selected as the Backup repository in the drop-down menu.

  6. Change Restore points to keep on disk to 2.

_images/lab02_05.png
  7. Click Advanced to specify advanced options for the backup job.

  8. Leave Incremental selected under Backup mode and click OK and Next.

  9. Do not enable synthetic or active full backups: this way the backup chain will be created in forever forward incremental backup mode.

_images/lab02_06.png
  10. From the Guest OS Credentials drop-down box, choose the Domain Administrator (veeamlabadministrator).

  11. Click the Applications button. Select Tiny-Veeam from the list and click Edit.

  12. Select the Disable application processing radio button. Click OK, and then click OK again.

    Tiny-Veeam is a Linux VM and has no VSS framework, so we disable application-aware image processing for this VM.

  13. Click Test Now and watch the test complete. Notice that Tiny-Veeam fails the guest credentials test. That is expected and is OK.

  14. Click Close when the testing completes.

  15. Click Next to proceed.

  16. Schedule this job to run daily, then click Apply to proceed. Note that there is no option to schedule an automatic retry for jobs configured to start only manually.

  17. Click Finish.

  18. Feel free to review the job by right-clicking it and selecting Edit. To keep the lab cleaned up for others, please delete your job when you’re done.

VBR - File Level Restore

We have a Domain Controller VM backup policy (“Domain Controller Backup - Agentless”).

_images/lab03_01.png

Step-By-Step

Use Veeam Explorer to browse for deleted or modified objects to restore.

  1. In the navigation pane, click Backups, Disk. On the right, choose “Domain Controller Backup - Agentless”, right-click “DC01” and select “Restore application items”, “Microsoft Active Directory objects…”

_images/lab03_02.png
  2. Select your restore point and click “Next”.

  3. Type your restore reason and click “Next”.

images/lab03/lab02_03.png
  4. Click “Browse”. Veeam Explorer for Active Directory will open.



Feature

K10 Demo

Purpose built for Kubernetes, Kasten K10 is a Cloud Native data management platform for Day 2 operations. It provides enterprise DevOps teams with an easy to use, scalable and secure system for backup/restore, disaster recovery and application mobility for Kubernetes applications. Kasten K10 integrates with relational and NoSQL databases, all major Kubernetes distributions, and runs in any cloud to maximize freedom of choice.​

_images/k10_01.png

K8S Setup

Kubernetes is an open-source platform for managing containers such as Docker. It is a management system that provides a platform for deployment automation. With Kubernetes, you can freely use hybrid, on-premises, and public cloud infrastructure to run your project's deployment tasks.

Docker lets you create containers from a pre-configured image and application. Kubernetes provides the next step, allowing you to balance loads between containers and run multiple containers across multiple systems.

This guidebook will walk you through installing Kubernetes on Ubuntu 20.04.

K8S Environment Setup

Using Vagrant to build the K8S environment. This setup includes 1 master node and 2 worker nodes.

K8S Host Settings

Hostname    IP Address      vCPU    vRAM (GB)    vDisk    OS
k8s-m1      10.110.10.80    2       2            120G     generic/ubuntu2004
k8s-w1      10.110.10.81    4       4            120G     generic/ubuntu2004
k8s-w2      10.110.10.82    4       4            120G     generic/ubuntu2004

Setting the ENV variables: before running vagrant, add the ENV variables first.

Create a .profile file and load it with source .profile.

.profile:

export ESXI_HOSTNAME="host ip address"
export ESXI_USERNAME="username"
export ESXI_PASSWORD="password"

Run the following command to load the ENV variables:

source ~/.profile
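Since the Vagrantfile reads these variables at run time, a small guard can fail fast when one is missing. This helper is an optional addition, not part of the original setup; the variable names match the .profile above:

```shell
#!/usr/bin/env bash
# Check that every ESXi connection variable is set and non-empty
# before invoking vagrant.
check_esxi_env() {
  local var missing=0
  for var in ESXI_HOSTNAME ESXI_USERNAME ESXI_PASSWORD; do
    if [ -z "${!var}" ]; then
      echo "ERROR: $var is not set" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Usage (after sourcing ~/.profile):
#   check_esxi_env && vagrant up
```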

Vagrantfile:

Vagrant.require_version ">= 1.6.0"

boxes = [
    {
        :name => "k8s-m1",
        :eth1 => "10.110.10.86",
        :netmask => "255.255.255.0",
        :mem => "4096",
        :cpu => "2"

    },
    {
        :name => "k8s-w1",
        :eth1 => "10.110.10.87",
        :mem => "4096",
        :netmask => "255.255.255.0",
        :cpu => "4"

    },
    {
        :name => "k8s-w2",
        :eth1 => "10.110.10.88",
        :netmask => "255.255.255.0",
        :mem => "4096",
        :cpu => "4"

    }
]

Vagrant.configure(2) do |config|

# config.vm.box = "ubuntu/jammy64"
config.vm.box = "generic/ubuntu2004"  #ubuntu 20.04  generic/ubuntu1804  ubuntu/focal64 bento/ubuntu-20.04
config.vm.box_download_insecure = true
boxes.each do |opts|
    config.vm.define opts[:name] do |config|
        config.vm.hostname = opts[:name]

        config.vm.provider "vmware_fusion" do |v|
        v.vmx["memsize"] = opts[:mem]
        v.vmx["numvcpus"] = opts[:cpu]
        end

        config.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
        end

        config.vm.provider "vmware_esxi" do |v|
        v.esxi_hostname = ENV['ESXI_HOSTNAME']
        v.esxi_username = ENV['ESXI_USERNAME']
        v.esxi_password = ENV['ESXI_PASSWORD']
        # v.esxi_password = 'prompt:'
        v.esxi_virtual_network = ['vagrant-private', 'swguest110']
        v.esxi_disk_store = 'ESXI02_Datastore'
        v.guest_name = opts[:name]
        v.guest_username = 'vagrant'
        v.guest_memsize = opts[:mem]
        v.guest_numvcpus = opts[:cpu]
        v.guest_disk_type = 'thin'
        v.guest_boot_disk_size = '30'
        v.guest_nic_type = 'e1000'
        v.guest_virtualhw_version = '14'
        v.debug = 'true'

        # v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        # v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
        end

        # config.vm.network :private_network, type: "dhcp"
        config.vm.network :public_network, ip: opts[:eth1], netmask: opts[:netmask], gateway: "10.110.10.254", dns: "10.110.10.101"
    end
end
config.vm.provision "shell", privileged: true, path: "./setup.sh"
end

K8S Setup

  1. Check the versions of kubeadm, kubelet and kubectl

kubeadm version
kubelet --version
kubectl version
  2. Initialize the K8S cluster (on the master node)

  • --apiserver-advertise-address=the master node's interface IP

  • --pod-network-cidr=your K8S pod network CIDR

sudo kubeadm init --apiserver-advertise-address=10.110.10.86  --pod-network-cidr=10.244.0.0/16
  3. Print the cluster join command

sudo kubeadm token create --print-join-command
  4. Join each worker node to the cluster (on the worker nodes)

sudo kubeadm join 10.110.10.86:6443 --token 3a5thm.2046hzjtm7mlnj2i \
        --discovery-token-ca-cert-hash sha256:8303a5d9d2b8e758f34a9bbd0d971b288974d4045af47caa45c0cef3f29d3f30
  5. Set up the kubectl environment (on the master node)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >>~/.bashrc
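The echo above appends a line to ~/.bashrc every time it runs. As an optional refinement (not part of the original steps), the append can be made idempotent:

```shell
# Append a line to a file only if it is not already present,
# so re-running the setup does not duplicate it.
add_once() {
  local line="$1" file="$2"
  grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

# Usage:
#   add_once 'source <(kubectl completion bash)' ~/.bashrc
```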
  6. Download flannel

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
  7. Edit kube-flannel.yml and add the line - --iface=eth1, then apply kube-flannel.yml

_images/k10_02.png
kubectl apply -f kube-flannel.yml
  8. Download the helm installation script

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
  9. Install helm

./get_helm.sh
  10. Add the helm repo and install csi-driver-nfs

helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system --version v4.1.0
  11. Add the ceph-csi repo and install ceph-csi-rbd

helm repo add ceph-csi https://ceph.github.io/csi-charts
kubectl create namespace "ceph-csi-rbd"
helm install --namespace "ceph-csi-rbd" "ceph-csi-rbd" ceph-csi/ceph-csi-rbd
  12. Create the csi-nfs StorageClass

cat <<'EOF' > storageclass-csi-nfs.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.110.10.83
  share: /nfs/export1/
  # csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
  # csi.storage.k8s.io/provisioner-secret-name: "mount-options"
  # csi.storage.k8s.io/provisioner-secret-namespace: "default"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nconnect=8  # only supported on linux kernel version >= 5.3
  - nfsvers=4.1
EOF
kubectl apply -f storageclass-csi-nfs.yaml
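To sanity-check that the new class provisions volumes, a test claim can be applied; the claim name and size here are illustrative, not part of the lab:

```yaml
# Hypothetical PVC for verifying the csi-nfs StorageClass;
# the name and requested size are examples only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-nfs-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: csi-nfs
  resources:
    requests:
      storage: 1Gi
```

After applying it, kubectl get pvc csi-nfs-test should show the claim Bound once the NFS provisioner responds.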
  13. Create the csi-nfs-backup StorageClass

cat <<'EOF' > storageclass-csi-nfs-backup.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-nfs-backup
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.110.10.83
  share: /nfs/export2/
  # csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
  # csi.storage.k8s.io/provisioner-secret-name: "mount-options"
  # csi.storage.k8s.io/provisioner-secret-namespace: "default"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nconnect=8  # only supported on linux kernel version >= 5.3
  - nfsvers=4.1
EOF
kubectl apply -f storageclass-csi-nfs-backup.yaml
  14. Create the VolumeSnapshot CRDs: volumesnapshotclasses, volumesnapshotcontents and volumesnapshots

kubectl create -f  https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-3.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl create -f  https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-3.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl create -f  https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-3.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
  15. Create the VolumeSnapshotClass for K10

cat <<'EOF' > volumesnapshotclass.yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  annotations:
    k10.kasten.io/is-snapshot-class: "true"
  name: csi-nfs-snap
driver: nfs.csi.k8s.io
deletionPolicy: Delete
EOF
kubectl apply -f volumesnapshotclass.yaml
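Once the class is registered, snapshots of PVCs backed by csi-nfs can be requested through it; a hand-written example, where the PVC name is hypothetical:

```yaml
# Hypothetical snapshot request using the csi-nfs-snap class defined above.
# v1beta1 matches the release-3.0 CRDs installed earlier.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: demo-snap
spec:
  volumeSnapshotClassName: csi-nfs-snap
  source:
    persistentVolumeClaimName: demo-pvc   # replace with a real claim name
```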
  16. Add the kasten repo and install Kasten K10

kubectl create namespace kasten-io
helm repo add kasten https://charts.kasten.io/

helm install k10 kasten/k10 --namespace kasten-io \
  --set global.persistence.metering.size=20Gi \
  --set prometheus.server.persistentVolume.size=20Gi \
  --set global.persistence.catalog.size=20Gi \
  --set injectKanisterSidecar.enabled=true \
  --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true \
  --set auth.tokenAuth.enabled=true \
  --set auth.basicAuth.htpasswd='admin:$apr1$nj8m0exb$RIkh3QZlbMUk4mXXHCTSG.'
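The auth.basicAuth.htpasswd value above is an Apache MD5 ($apr1$) password hash. If you want to generate your own, openssl can produce one; the username and password below are placeholders, not the lab's credentials:

```shell
# Generate an htpasswd-style entry for K10 basic auth.
# K10_USER and K10_PASS are example values only.
K10_USER="admin"
K10_PASS="MyPassword"
K10_HASH=$(openssl passwd -apr1 "$K10_PASS")
echo "${K10_USER}:${K10_HASH}"
```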
  17. Expose the K10 gateway via a NodePort service

cat <<'EOF' > k10-nodeport-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway-nodeport
  namespace: kasten-io
spec:
  selector:
    service: gateway
  ports:
  - name: http
    port: 8000
    nodePort: 32000
  type: NodePort
EOF
kubectl apply -f k10-nodeport-svc.yaml
  18. Run the K10 pre-flight checks (primer)

curl -s https://docs.kasten.io/tools/k10_primer.sh  | bash
  19. Deploy the demo shopping website

git clone https://github.com/microservices-demo/microservices-demo.git
cd microservices-demo/deploy/kubernetes
kubectl apply -f complete-demo.yaml
### run application using browser
## http://10.110.10.86:30001/
  20. Label the namespace for Kanister sidecar injection

kubectl label namespace generic k10/injectKanisterSidecar=true
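The same label can also be kept declaratively in the namespace manifest, so it survives a cluster rebuild; a sketch, assuming the namespace is named generic as in the command above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: generic
  labels:
    k10/injectKanisterSidecar: "true"
```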

V12 Update

Veeam Backup & Replication V12 is a major release that updates the core architecture and adds new protection capabilities, summarized below.

_images/v12_00.png

V12 Features

New capabilities introduced with V12 include:

  • Backups can go directly to object storage, and cloud-based agents are available as cloud-accelerated features

  • With immutability everywhere, data can be recovered after ransomware and cyberattack threats can be stopped even faster

  • Efficiency at scale improves with additional enterprise application support and innovations

  • A new Veeam Backup & Replication plug-in for Kasten by Veeam K10 V5.0 provides visibility and management for Kubernetes data protection.

Key Highlights

_images/v12_01.png
Core Architecture Improvements
More Options for the VBR Database

Veeam is introducing a new database platform, PostgreSQL v14. First and foremost, like SQL Server Express, it is free; but from a use and scalability perspective it has no database size limit or compute restrictions, and it offers improved performance over SQL Express. SQL Express will still be a usable option if it is your preference. Initially, PostgreSQL is supported only in VBR and Enterprise Manager (EM).

SQL Express limitations

  • 10 GB maximum database size

  • 4 cores maximum

  • 1410 MB buffer pool limit

SQL Standard / Enterprise Edition

  • High licensing costs

Postgres

  • Free

  • No database size or compute restrictions

  • Proven in other Veeam Products

  • Performance

Move or Copy Backups with VeeaMover

The new VeeaMover feature lets you easily copy or move backups between different repositories or backup jobs with one click.

Use cases

  • Move backups to different repository

  • Copy backups to different repository

  • Migrate ReFS to XFS for Hardened Repository

  • Migrate NTFS to ReFS

  • Re-balance Scale-Out Repository

  • Scale-Out Repository extent evacuation

