Video denoising for security and privacy in fog computing

  • Hong Zhang
  • Yifan Yang*
  • Ding Yuan
  • Daniel Sun
  • Jun Zhang
  • Guoqiang Li
  • Mingui Sun

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

To reduce heavy noise in degraded video with low, predictable latency while preserving privacy, a powerful and efficient video denoising algorithm based on fog computing is proposed for the Visual Internet of Things. The conventional approach removes noise in the cloud; however, this may overload computation and communication and raises security and privacy concerns. The proposed denoising algorithm is distributed to heterogeneous devices at network edges, so that noise is reduced in the fog rather than the cloud, preserving privacy and avoiding security risks. To address latency, communication rate, and extremely heavy noise, the algorithm applies structure registration, inter-frame and intra-frame filters, and distribution compensation. A scheme for encrypting the denoised data at network edges is also provided, so that security and privacy issues are avoided during transmission and storage. Under extremely heavy noise, experimental results demonstrate that the proposed approach outperforms other denoising methods in peak signal-to-noise ratio and visual quality, at low computational cost, with high bandwidth efficiency and a low-latency response in a fog computing manner.
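The keywords below mention a bi-directional infinite impulse response (IIR) filter, which is one way to realize the inter-frame (temporal) filtering the abstract describes. As a minimal illustrative sketch only (the paper's actual filter design, coefficients, and combination rule are not given here; the first-order recursion, the smoothing factor `alpha`, and the averaging of the two passes are assumptions), a forward pass and a backward pass can be run over a pixel's intensity trace across frames and then combined:

```python
# Illustrative sketch: first-order bi-directional IIR filtering along the
# time axis of a single pixel's intensity trace. NOT the paper's algorithm;
# `alpha` and the averaging of the two passes are assumed for demonstration.

def bidirectional_iir_filter(trace, alpha=0.7):
    """Smooth a 1-D temporal intensity trace with forward and backward IIR passes."""
    n = len(trace)
    fwd = [0.0] * n
    bwd = [0.0] * n
    fwd[0] = trace[0]
    for t in range(1, n):          # forward (causal) pass
        fwd[t] = alpha * fwd[t - 1] + (1 - alpha) * trace[t]
    bwd[n - 1] = trace[n - 1]
    for t in range(n - 2, -1, -1):  # backward (anti-causal) pass
        bwd[t] = alpha * bwd[t + 1] + (1 - alpha) * trace[t]
    # Combine both passes so smoothing has no net temporal lag
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]

# Example: one pixel's value across 8 noisy frames
noisy = [10.0, 12.0, 9.0, 11.0, 10.0, 13.0, 8.0, 10.0]
smoothed = bidirectional_iir_filter(noisy)
```

Running both a causal and an anti-causal pass and combining them is what makes the filter bi-directional: each pass alone would lag the signal, while their combination smooths symmetrically in time, which suits a low-latency fog node processing short buffered frame windows.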

Original language: English
Article number: e4763
Journal: Concurrency and Computation: Practice and Experience
Volume: 31
Issue number: 22
DOIs
State: Published - 25 Nov 2019

Keywords

  • bi-directional infinite impulse response filter
  • fog computing
  • iterative closest set
  • privacy preservation
  • structural similarity
  • video denoising
