A surveillance system on Fedora Server

Fedora Server is a versatile platform for a wide range of applications, including home or business lab setups. One application that runs well on a Fedora Server is a Network Video Recorder (NVR), which makes it possible to build a Closed-Circuit Television (CCTV) surveillance system based on IP cameras. Viseron, a user-friendly NVR application, is easy to configure and has minimal dependencies, such as in-memory or on-disk databases. It offers extensive customization through a range of components to choose from, and it can make use of hardware such as Intel VAAPI and the Google Coral Edge TPU for video processing tasks. Features that make it well suited to CCTV include motion detection, object detection, license plate recognition, and face detection. This article guides you through setting up Viseron on Fedora Server with the help of Kubernetes.

Preparing the Infrastructure for Hosting the NVR Application

The following steps assume a fresh installation of Fedora Server, with the machine reachable under the hostname server and a user named user that has sudo privileges.

Setting Up a Single Node Kubernetes Environment

Start by configuring the firewall to accommodate the Kubernetes and Viseron installations:

# firewall-cmd --permanent --add-port=6443/tcp #apiserver
# firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16 #pods
# firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16 #services
# firewall-cmd --permanent --add-port=30888/tcp #node port we will use for viseron
# firewall-cmd --reload
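
As an optional check, the new rules can be listed after the reload; these commands only read the current firewall state:

# firewall-cmd --list-ports #expect 6443/tcp and 30888/tcp
# firewall-cmd --zone=trusted --list-sources #expect 10.42.0.0/16 and 10.43.0.0/16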

Next, install K3s, a lightweight certified Kubernetes distribution well suited to single-node setups, on the Fedora Linux system. Run the following command on the server to install the latest stable version of K3s:

$ curl -sfL https://get.k3s.io | sh -

To confirm the installation succeeded, run:

$ sudo k3s kubectl get node

NAME     STATUS   ROLES                  AGE    VERSION
server   Ready    control-plane,master   2m   v1.29.6+k3s2
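
The installer also sets up a k3s systemd service on the server; if the node does not appear as Ready, checking the service status is a reasonable first step:

$ sudo systemctl status k3s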

To access the Kubernetes cluster from an external machine, copy the file /etc/rancher/k3s/k3s.yaml from the server to the local machine. Since the k3s.yaml file is not world-readable on the server for security reasons, one approach is to first copy it to the home directory of user on the server using sudo, and then transfer it to the local machine with scp. Alternatively, the following command, run from the local machine, accomplishes this in one go:

$ ssh -q user@server "sudo --stdin cat /etc/rancher/k3s/k3s.yaml" < <(read -s && echo "$REPLY") > k3s.yaml

This command:

  • Runs sudo on the server to read the k3s.yaml file
  • Redirects the output to a local k3s.yaml file
  • Uses sudo --stdin so that the password prompt is written to stderr and does not end up in the redirected stdout
  • Uses ssh in quiet mode (-q) so that ssh itself does not write extra output into the redirected stdout
  • Uses read -s to collect the sudo password without echoing it to the console
  • Feeds the sudo password to the stdin of the ssh command
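
For reference, the two-step approach mentioned above can be sketched as follows; this is only a sketch and assumes the home directory of user on the server is /home/user:

$ ssh -t user@server "sudo cp /etc/rancher/k3s/k3s.yaml /home/user/ && sudo chown user: /home/user/k3s.yaml"
$ scp user@server:/home/user/k3s.yaml .
$ ssh user@server "rm /home/user/k3s.yaml" #remove the temporary copy

The -t flag allocates a terminal so that sudo can prompt for the password interactively.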

A few additional steps are needed before the k3s.yaml file can be used to access the cluster. The K3s installation generated a TLS certificate for several local hostnames, including the node's name, server. The k3s.yaml file specifies the Kubernetes endpoint as https://127.0.0.1:6443. To allow access from the local machine, change this to https://server:6443, either with a preferred text editor or with the sed command:

$ sed -i.bak 's/127.0.0.1/server/g' k3s.yaml
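
A quick check confirms the change (the -i.bak option keeps the original file as k3s.yaml.bak):

$ grep 'server:' k3s.yaml #should now show https://server:6443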

Next, move the file to the ~/.kube directory and restrict its permissions:

$ mkdir ~/.kube
$ mv k3s.yaml ~/.kube/k3s.yaml
$ chmod 0600 ~/.kube/k3s.yaml

As the final step for cluster access, the kubectl tool needs to be installed on the local machine:

$ sudo dnf install kubernetes-client

After this setup, you should be able to list the server node with the following commands:

$ export KUBECONFIG=$HOME/.kube/k3s.yaml
$ kubectl get nodes

NAME     STATUS   ROLES                  AGE    VERSION
server   Ready    control-plane,master   10m   v1.29.6+k3s2
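
The KUBECONFIG export only lasts for the current shell session. To make it permanent, one option is to append it to the shell profile (assuming bash):

$ echo 'export KUBECONFIG=$HOME/.kube/k3s.yaml' >> ~/.bashrc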

Configuring a Logical Volume Manager (LVM) Storage Provider

The NVR application needs volumes for storing videos and configuration, so set up a storage provider based on the Logical Volume Manager (LVM). By default, K3s uses local-path as the storage provisioner:

$ kubectl get storageclass

NAME                   PROVISIONER             (cut for brevity)
local-path (default)   rancher.io/local-path

Although this setup is flexible, it does not enforce any size limits, which can lead to problems with disk space utilization. To make better use of the available storage, the existing LVM volume group named fedora on the Fedora Server can be repurposed:

# vgs

VG      #PV  #LV  #SN  Attr    VSize  VFree
fedora    1    7    0  wz--n-  952g   937g

# lvs

LV    VG      Attr        LSize   (cut for brevity)
root  fedora  -wi-ao----  15.00g

There is plenty of unused space in the fedora volume group, and this spare capacity can be used to provision Kubernetes volumes. This is where the OpenEBS LocalPV-LVM project comes in: it is a storage provider that creates LVM logical volumes on local nodes. To configure this storage provider, apply the project's Kubernetes manifest to the cluster:

$ kubectl apply -f https://openebs.github.io/charts/lvm-operator.yaml

After running the above command and allowing some time for the setup, all components needed to implement the LVM storage provider are present in the cluster. Confirm a successful deployment with the following commands:

$ kubectl -n kube-system get pod | grep openebs

openebs-lvm-controller-0   5/5   Running   0   1m
openebs-lvm-node-8mxj5     2/2   Running   0   1m

$ kubectl get crd | grep openebs

lvmnodes.local.openebs.io       2024-07-30T04:00:00Z
lvmsnapshots.local.openebs.io   2024-07-30T04:00:00Z
lvmvolumes.local.openebs.io     2024-07-30T04:00:00Z
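
The lvmnodes resource can also be queried to see which volume groups the provisioner has discovered; this assumes the OpenEBS components run in the kube-system namespace, as shown above:

$ kubectl -n kube-system get lvmnodes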

As a final step, configure a custom Kubernetes storage class utilizing the new OpenEBS storage provider:

Prepare a fedora-vg-openebs-sc.yaml file with the following content:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv #arbitrary storage class name
allowVolumeExpansion: true #allows for volume expansion
parameters:
  storage: "lvm"
  volgroup: "fedora" #name of the volume group in the server
provisioner: local.csi.openebs.io

Subsequently, deploy the configuration:

$ kubectl apply -f fedora-vg-openebs-sc.yaml

Verify the successful implementation with the following command:

$ kubectl get storageclass

NAME                   PROVISIONER             (cut for brevity)
local-path (default)   rancher.io/local-path
openebs-lvmpv          local.csi.openebs.io
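
As an optional sanity check (not required for the rest of the article), a small test PersistentVolumeClaim can be created against the new storage class; the claim name lvmpv-test and the 1Gi size used here are arbitrary:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvmpv-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: openebs-lvmpv
  resources:
    requests:
      storage: 1Gi
EOF

Once the claim reports Bound (kubectl get pvc lvmpv-test), a matching logical volume appears in the lvs output on the server. The test claim can then be removed with kubectl delete pvc lvmpv-test.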

Installation of the Viseron NVR Application

In collaboration with the Viseron maintainer, a Helm chart has been created to simplify installing and managing Viseron on Kubernetes. Helm is a package manager for Kubernetes.

To proceed with the next steps, install Helm by following the official documentation or simply via dnf install helm.

Assuming Intel-based server hardware with VAAPI capabilities, start by adding the Viseron Helm repository and then deploy the application to the cluster with a custom configuration:

$ helm repo add viseron https://roflcoopter.github.io/viseron-helm-chart
$ helm repo update
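
Optionally, confirm that the chart is now available from the newly added repository:

$ helm search repo viseron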

Create a viseron-values.yaml file with the following details:

securityContext:
  privileged: true # Privileges to use the /dev/dri device

service:
  type: NodePort
  nodePort: 30888 # The port where the app will be available on our server, opened with previously defined firewall-cmd rules

storage:
  config:
    className: "openebs-lvmpv" # Our storageclass
    size: 50Mi # Adequate space for storing configuration data
  data:
    className: "openebs-lvmpv" # Our storageclass
    size: 200Gi # Sufficient capacity allocation for video storage. Refer to official Viseron documentation for customization of retention policies (default is 7 days)

volumes:
  - name: dev-dri
    hostPath:
      path: /dev/dri # To expose this device as a volume to the containerized application

volumeMounts:
  - name: dev-dri
    mountPath: /dev/dri # Mounting the dev-dri volume to the /dev/dri path within the container

Then launch the Helm chart installation:

$ helm -n nvr upgrade --create-namespace --install --values viseron-values.yaml viseron viseron/viseron

The arguments used in the helm command are:

  • -n nvr: Specifies the Kubernetes namespace for the application
  • upgrade: Requests an upgrade of the release (a release, in Helm terms, is an installed application)
  • --create-namespace: Creates the nvr namespace if it does not exist
  • --install: Installs the release (instead of upgrading) if it does not yet exist
  • --values viseron-values.yaml: Specifies the customized configuration file to use
  • viseron: The release name
  • viseron/viseron: The Helm chart to install

To update the application in the future, refresh the Helm repository and re-run the installation command, for example:
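
$ helm repo update
$ helm -n nvr upgrade --create-namespace --install --values viseron-values.yaml viseron viseron/viseron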

If the installation completes successfully, the output of the following commands should look similar to this:

$ helm -n nvr history viseron

REVISION  UPDATED     STATUS    CHART          APP VERSION
1         Tue Jul...  deployed  viseron-0.1.2  2.3.1

$ kubectl -n nvr get pod

NAME                       READY   STATUS    RESTARTS   AGE
viseron-5745654d57-9kcbz   1/1     Running   0          2m

$ kubectl -n nvr get pvc

NAME             STATUS   VOLUME                                     (cut for brevity)
viseron-config   Bound    pvc-0492f99d-1b35-42e5-9725-af9c50fd0d63
viseron-data     Bound    pvc-4230c3fe-ae9d-471c-b0a7-0c0645e1449b
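
The NodePort service defined in the values file can be checked as well; it should expose the application on port 30888:

$ kubectl -n nvr get svc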

Inspecting the Viseron logs shows which hardware acceleration platforms were detected. Notably, in the documented scenario, VAAPI and OpenCL were recognized and available for use:

$ kubectl -n nvr logs viseron-5745654d57-9kcbz

(cut for brevity)
************** Setting EdgeTPU permissions ***************
Coral Vendor IDs:
"1a6e"
"18d1"
No EdgeTPU USB device was found
No EdgeTPU PCI device was found
************************** Done **************************

****** Checking for hardware acceleration platforms ******
OpenCL is available!
VA-API is available!
CUDA cannot be used
(cut for brevity)

If /dev/dri had not been mounted in the container, the logs might indicate VA-API cannot be used.

After SSHing back into the server, the Kubernetes volumes are visible as LVM logical volumes:

# lvs

LV                                         VG      Attr        LSize    (cut for brevity)
pvc-0492f99d-1b35-42e5-9725-af9c50fd0d63   fedora  -wi-ao----  52.00m
pvc-4230c3fe-ae9d-471c-b0a7-0c0645e1449b   fedora  -wi-ao----  200.00g
root                                       fedora  -wi-ao----  15.00g

The Viseron application should now be accessible at http://server:30888/. On the first visit it displays a notice indicating that it is ready to connect to cameras. The default configuration is only an example and points to non-existent cameras; it can be changed under Menu / Administration / Configuration.

Conclusion

In this guide, the steps to deploy the Viseron NVR application on a Fedora Server using Kubernetes have been detailed. Key components integrated in this process include:

  • K3s, a lightweight single-node Kubernetes distribution
  • OpenEBS LocalPV-LVM, providing Kubernetes volumes backed by the existing fedora LVM volume group
  • Helm and the Viseron Helm chart, used to install and manage the application

The next steps are to add cameras, configure the video processing features, and set up the recorder to start recording automatically when noteworthy events are detected.
