
14 posts tagged with "iot"


· 3 min read

KubeEdge v1.17 is now available! This latest release introduces several new features and enhancements, including support for edge pods using InClusterConfig to access the Kubernetes API server, video streaming data reporting in Mappers, auto-restarting for EdgeCore modules, and more.

1.17 What's New

Release Highlights

Support Edge Pods Using InClusterConfig to Access Kube-APIServer

The InClusterConfig mechanism enables cloud pods to directly access the Kubernetes API server. With this release, KubeEdge now supports edge pods using the InClusterConfig mechanism to access the Kube-APIServer directly, even when the edge and cloud are in different network environments.
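
For pods on edge nodes this works with the standard client-go flow. The snippet below is a minimal, generic client-go sketch (not code from the KubeEdge repository) showing what an edge pod could run once the feature is enabled:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // Build a client from the service account and environment injected into the pod;
    // on an edge node the request is served through the KubeEdge cloud-edge tunnel.
    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // Simple smoke test: list pods in the default namespace via the Kube-APIServer.
    pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("found %d pods\n", len(pods.Items))
}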

Refer to the link for more details. (#5524, #5541)

Mapper Supports Video Streaming Data Reporting

Previously, Mappers could only process structured device data. In v1.17, video stream data processing features have been added to the Mapper-Framework.

  • Edge Camera Device Management

    v1.17 provides a built-in Mapper based on the Onvif protocol, which can manage Onvif network cameras as devices in the KubeEdge cluster and obtain the camera's authentication file and RTSP video stream.

  • Video Stream Data Processing

    Video stream data processing capabilities have been introduced to the Mapper-Framework data plane. The video stream reported by edge camera devices can be saved as frame files or video files.

Refer to the link for more details. (#5448, #5514, mappers-go/#127)

Support Auto-Restarting for Edge Modules

Previously, EdgeCore modules could fail to start because of recoverable issues unrelated to configuration, such as process start order. In v1.17, the BeeHive framework has been improved to support automatically restarting modules. Users can now configure EdgeCore modules to restart automatically instead of restarting the entire component.

Refer to the link for more details. (#5509, #5513)

Introduce keadm ctl Command to Support Pods Query and Restart at Edge

The new keadm ctl command has been introduced in v1.17, allowing users to query and restart pods on edge nodes even when the nodes are offline (a usage sketch follows the list):

  • Query: keadm ctl get pod [flags]
  • Restart: keadm ctl restart pod [flags]
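
A rough usage sketch is shown below; the flags are assumptions modeled on kubectl-style flags, so run keadm ctl get pod --help on the node for the exact set:

# On the edge node, query pods even while the node is disconnected from the cloud
keadm ctl get pod -n default

# Restart a pod by name (the pod name here is illustrative)
keadm ctl restart pod nginx-5c7c978f78-xxxxx -n default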

Refer to the link for more details. (#5504)

Keadm Enhancements

Several enhancements were made to the keadm installation tool:

  • Refactored the keadm init command
  • Changed the command keadm generate to keadm manifest
  • Added image-repository flag to keadm join to support customization
  • Split the keadm reset command into keadm reset cloud and keadm reset edge

Refer to the link for more details. (#5317)

Add MySQL to Mapper Framework

The Mapper Framework data plane now includes MySQL database support in its pushMethod. When using MySQL, basic configuration parameters for the MySQL client need to be added in the DeviceInstance.
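
As a rough illustration only (the field names under pushMethod/dbMethod below are assumptions rather than the authoritative CRD schema; see the linked PR for the exact definition), the MySQL client parameters would sit in a DeviceInstance property along these lines:

properties:
  - name: temperature
    reportToCloud: true
    pushMethod:
      dbMethod:
        mysql:
          mysqlClientConfig:
            addr: 127.0.0.1:3306   # MySQL server address (illustrative)
            database: kubeedge     # target database name (illustrative)
            userName: root         # user name; credentials are expected from the environment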

Refer to the link for more details. (#5376)

Upgrade Kubernetes Dependency to v1.28.6

The vendored Kubernetes version has been upgraded to v1.28.6, so users can now use the latest features on both the cloud and the edge side.

Refer to the link for more details. (#5412)

Important Steps before Upgrading

  • To use the InClusterConfig feature for edge pods, you need to enable the metaServer and dynamicController switches, and set featureGates.requireAuthorization=true in the CloudCore and EdgeCore configuration files (an indicative configuration sketch follows this list).

  • To use the Auto-Restarting for Edge Modules feature, you must enable the moduleRestart feature gate in EdgeCore.
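
The fragments below are an indicative sketch of those settings; the exact field paths can differ between versions, so verify them against the v1.17 documentation:

# CloudCore configuration (indicative fragment)
modules:
  dynamicController:
    enable: true
featureGates:
  requireAuthorization: true

# EdgeCore configuration (indicative fragment)
modules:
  metaManager:
    metaServer:
      enable: true
featureGates:
  requireAuthorization: true
  moduleRestart: true   # enables auto-restarting of EdgeCore modules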

Download the v1.17.0 release from the release page and upgrade today to take advantage of these new capabilities!

· 15 min read

On January 23, 2024 (Beijing time), KubeEdge released v1.16. The new version adds a number of enhancements, with major improvements in cluster upgrades, cluster usability, and edge device management.

New features in KubeEdge v1.16:

Release Highlights

Cluster Upgrade: Support Automated Upgrade of Cloud and Edge Components

As the KubeEdge community keeps evolving and new versions are released continuously, users increasingly need a practical way to upgrade the versions running in their environments. To address the complexity of the upgrade procedure and the repetitive work on edge nodes, KubeEdge v1.16.0 supports automated upgrades of the cloud and edge components. Users can upgrade the cloud side with a single Keadm command and upgrade edge nodes in batches by creating the corresponding Kubernetes API object.

  • Cloud upgrade

    The cloud upgrade command uses a third-level sub-command to distinguish it from the edge upgrade, and gives users a more convenient way to upgrade the KubeEdge components on the cloud side. After the upgrade completes, the previous ConfigMap configuration is printed; if the ConfigMap had been modified manually, the configuration file can be restored from this history. The usage information can be viewed with the help flag:

    keadm upgrade cloud --help
    Upgrade the cloud components to the desired version, it uses helm to upgrade the installed release of cloudcore chart, which includes all the cloud components

    Usage:
    keadm upgrade cloud [flags]

    Flags:
    --advertise-address string Please set the same value as when you installed it, this value is only used to generate the configuration and does not regenerate the certificate. eg: 10.10.102.78,10.10.102.79
    -d, --dry-run Print the generated k8s resources on the stdout, not actual execute. Always use in debug mode
    --external-helm-root string Add external helm root path to keadm
    --force Forced upgrading the cloud components without waiting
    -h, --help help for cloud
    --kube-config string Use this key to update kube-config path, eg: $HOME/.kube/config (default "/root/.kube/config")
    --kubeedge-version string Use this key to set the upgrade image tag
    --print-final-values Print the final values configuration for debuging
    --profile string Sets profile on the command line. If '--values' is specified, this is ignored
    --reuse-values reuse the last release's values and merge in any overrides from the command line via --set and -f.
    --set stringArray Sets values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
    --values stringArray specify values in a YAML file (can specify multiple)

    Example upgrade command:

    keadm upgrade cloud --advertise-address=<the value set during init> --kubeedge-version=v1.16.0
  • Edge upgrade

    KubeEdge v1.16.0 supports one-click, batch upgrades of edge nodes through the NodeUpgradeJob Kubernetes API. The API supports pre-upgrade checks, concurrent upgrades, failure tolerance thresholds, timeout handling, and more. To support this, KubeEdge introduces a cloud-edge task framework, so community developers no longer need to implement task control and status reporting logic and can focus on the task functionality itself.

    Example upgrade API object:

    apiVersion: operations.kubeedge.io/v1alpha1
    kind: NodeUpgradeJob
    metadata:
      name: upgrade-example
      labels:
        description: upgrade-label
    spec:
      version: "v1.16.0"
      checkItems:
        - "cpu"
        - "mem"
        - "disk"
      failureTolerate: "0.3"
      concurrency: 2
      timeoutSeconds: 180
      labelSelector:
        matchLabels:
          "node-role.kubernetes.io/edge": ""
          node-role.kubernetes.io/agent: ""
  • Compatibility tests

    The KubeEdge community provides thorough version compatibility testing. During an upgrade, users only need to keep the version gap between cloud and edge within two releases to avoid problems caused by inconsistent cloud and edge versions.

Refer to the links for more details:

https://github.com/kubeedge/kubeedge/pull/5330 https://github.com/kubeedge/kubeedge/pull/5229 https://github.com/kubeedge/kubeedge/pull/5289

Support Image Pre-Pull on Edge Nodes

The new version introduces image pre-pull. Through the ImagePrePullJob Kubernetes API, users can load images onto edge nodes ahead of time. The feature supports pre-pulling multiple images on batches of edge nodes or node groups, helping to reduce the high failure rates and low efficiency that image pulling can cause during application deployment or updates, especially at large scale.

Example image pre-pull API object:

apiVersion: operations.kubeedge.io/v1alpha1
kind: ImagePrePullJob
metadata:
  name: imageprepull-example
  labels:
    description: ImagePrePullLabel
spec:
  imagePrePullTemplate:
    images:
      - image1
      - image2
    nodes:
      - edgenode1
      - edgenode2
    checkItems:
      - "disk"
    failureTolerate: "0.3"
    concurrency: 2
    timeoutSeconds: 180
    retryTimes: 1

Refer to the links for more details:

https://github.com/kubeedge/kubeedge/pull/5310 https://github.com/kubeedge/kubeedge/pull/5331

Support Installing Windows Edge Nodes with Keadm

KubeEdge v1.15 made it possible to run edge nodes on Windows. In the new version, the Keadm installation tool can install Windows edge nodes directly, using the same commands as for Linux edge nodes, which simplifies the installation of edge nodes.

Refer to the link for more details:

https://github.com/kubeedge/kubeedge/pull/4968

Add Compatibility Tests for Multiple Container Runtimes

The new version adds compatibility tests for multiple container runtimes. Four mainstream container runtimes (containerd, docker, isulad, and cri-o) are now covered, safeguarding the quality of KubeEdge releases. Users can also refer to the adaptation and installation scripts in the PR when installing a container runtime.

Refer to the link for more details:

https://github.com/kubeedge/kubeedge/pull/5321

EdgeApplication Supports Overriding More Deployment Fields

In the new version, the per-node-group configuration items (overriders) in EdgeApplication have been extended, mainly with environment variables, command arguments, and resources. When node groups in different regions need to connect to different middleware, the environment variables (env) or command arguments (command, args) can be used to override the middleware connection information. When node resources differ across regions, the resource configuration (resources) can be used to override the CPU and memory settings. An illustrative sketch follows.
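
For illustration only, the fragment below sketches how one node group might be differentiated; the overrider field names are assumptions here, so check the linked PRs for the exact schema:

apiVersion: apps.kubeedge.io/v1alpha1
kind: EdgeApplication
metadata:
  name: edge-app-example
spec:
  workloadScope:
    targetNodeGroups:
      - name: beijing
        overriders:
          # The overrider names below are illustrative, not the authoritative schema.
          envOverriders:
            - containerName: app
              operator: add
              value:
                - name: MIDDLEWARE_ADDR
                  value: "redis-beijing:6379"
          resourcesOverriders:
            - containerName: app
              value:
                limits:
                  cpu: "500m"
                  memory: "512Mi"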

Refer to the links for more details:

https://github.com/kubeedge/kubeedge/pull/5262 https://github.com/kubeedge/kubeedge/pull/5370

Support Mapper Upgrades Based on Mapper-Framework

In v1.16, the ability to upgrade Mapper components is built on the Mapper development framework Mapper-Framework. Mapper projects generated by the new framework import part of the original Mapper-Framework functionality as a dependency, so when an upgrade is needed, users can complete it simply by upgrading the dependency version, which simplifies the Mapper upgrade process.

  • Mapper-Framework code decoupling:

    In v1.16, the Mapper-Framework code is decoupled into a user layer and a business layer. The user layer contains the device drivers and the closely related parts of the management plane and data plane; it is still generated into the user's Mapper project by Mapper-Framework and can be modified as needed. The business layer contains capabilities such as Mapper registration with the cloud and delivery of the Device list from the cloud, and is kept in the kubeedge/mapper-framework sub-repository.

  • Mapper upgrade framework:

    In v1.16, the user Mapper project generated by Mapper-Framework uses the business-layer functionality from the kubeedge/mapper-framework sub-repository as a dependency to provide complete device management. Going forward, users can upgrade a Mapper simply by upgrading the dependency version, without having to modify large amounts of code by hand.

Refer to the links for more details:

https://github.com/kubeedge/kubeedge/pull/5308 https://github.com/kubeedge/kubeedge/pull/5326

Built-in Redis and TDengine Database Integration in the DMI Data Plane

v1.16 further enhances the DMI data plane's ability to push data to user databases, adding Redis and TDengine as built-in databases. Users can define the relevant fields directly in the device-instance configuration file so that the Mapper automatically pushes device data to Redis or TDengine. The database fields are defined as follows:

type DBMethodRedis struct {
    // RedisClientConfig of redis database
    // +optional
    RedisClientConfig *RedisClientConfig `json:"redisClientConfig,omitempty"`
}

type RedisClientConfig struct {
    // Addr of Redis database
    // +optional
    Addr string `json:"addr,omitempty"`
    // Db of Redis database
    // +optional
    DB int `json:"db,omitempty"`
    // Poolsize of Redis database
    // +optional
    Poolsize int `json:"poolsize,omitempty"`
    // MinIdleConns of Redis database
    // +optional
    MinIdleConns int `json:"minIdleConns,omitempty"`
}

type DBMethodTDEngine struct {
    // tdengineClientConfig of tdengine database
    // +optional
    TDEngineClientConfig *TDEngineClientConfig `json:"TDEngineClientConfig,omitempty"`
}

type TDEngineClientConfig struct {
    // addr of tdEngine database
    // +optional
    Addr string `json:"addr,omitempty"`
    // dbname of tdEngine database
    // +optional
    DBName string `json:"dbName,omitempty"`
}
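
Based on the json tags above, a device-instance property could push data to Redis roughly as follows; only the redisClientConfig leaf fields are taken from the struct definitions, while the surrounding pushMethod/dbMethod wrapper is an assumption:

properties:
  - name: temperature
    reportToCloud: true
    pushMethod:
      dbMethod:
        redis:
          redisClientConfig:
            addr: 127.0.0.1:6379   # Redis address
            db: 0                  # Redis database index
            poolsize: 5            # connection pool size
            minIdleConns: 1        # minimum idle connections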

Refer to the link for more details:

https://github.com/kubeedge/kubeedge/pull/5064

USB-Camera Mapper Implementation Based on Mapper-Framework

Based on KubeEdge's Mapper-Framework, the new version provides a USB-Camera Mapper example. The Mapper is developed for cameras that use the USB protocol, and users can build on this example and Mapper-Framework to more easily develop Mappers for their own scenarios.

The example includes a Helm chart. Users can deploy the USB-Camera Mapper by editing usbmapper-chart/values.yaml, mainly to set the USB-Camera device files, the nodeName, and the number of USB-Camera replicas; other settings can be adjusted as needed. The Mapper image is built with the Dockerfile in the example directory.

global:
  replicaCounts:
    ......
    cameraUsbMapper:
      replicaCount: 2 # number of USB-Camera replicas
      namespace: default
    ......
  nodeSelectorAndDevPath:
    mapper:
      - edgeNode: "edgenode02" # nodeName of the edge node the USB-Camera is connected to
        devPath: "/dev/video0" # device file of the USB-Camera
      - edgeNode: "edgenode1"
        devPath: "/dev/video17"
    ......

The USB-Camera Mapper is then deployed with the following command:

helm install usbmapper-chart ./usbmapper-chart

Refer to the link for more details:

https://github.com/kubeedge/mappers-go/pull/122

Usability Improvement: Enhanced Keadm-Based Deployment Capabilities

  • Add a configuration flag for the cloud-edge communication protocol

    In KubeEdge v1.16.0, when using keadm join to add an edge node, the cloud-edge communication protocol can be configured with the --hub-protocol flag. KubeEdge currently supports the websocket and quic protocols, with websocket as the default.

    Example command:

    keadm join --cloudcore-ipport <cloud node IP>:10001 --hub-protocol=quic --kubeedge-version=v1.16.0 --token=xxxxxxxx

    Note: when --hub-protocol is set to quic, the port in --cloudcore-ipport must be set to 10001, and the quic switch must be enabled in the CloudCore ConfigMap, i.e. set modules.quic.enable to true.

    Example: run kubectl edit cm -n kubeedge cloudcore, set the enable attribute under quic to true, save the change, and then restart the CloudCore pod.

    modules:
      ......
      quic:
        address: 0.0.0.0
        enable: true # quic protocol switch
        maxIncomingStreams: 10000
        port: 10001

    Refer to the link for more details:

    https://github.com/kubeedge/kubeedge/pull/5156

  • Decouple keadm join from CNI plugins

    In the new version, CNI plugins no longer have to be installed before joining an edge node with keadm join; edge node deployment has been decoupled from CNI plugins. This change has also been backported to v1.12 and later, so users are welcome to use the new version or upgrade older ones.

    Note: if applications deployed on the edge node need the container network, a CNI plugin still has to be installed after edgecore is deployed.

    Refer to the link for more details:

    https://github.com/kubeedge/kubeedge/pull/5196

Upgrade Kubernetes Dependency to v1.27

The new version upgrades the vendored Kubernetes dependency to v1.27.7, so you can use the features of the new version on both the cloud and the edge.

Refer to the link for more details:

https://github.com/kubeedge/kubeedge/pull/5121

Important Steps before Upgrading

In the new version, the edge MQTT service (Eclipse Mosquitto) is managed by a DaemonSet, and whether the MQTT service is enabled can be configured through the Helm values on the cloud side. Managing MQTT with a DaemonSet makes it easy to manage the edge MQTT brokers in a unified way; for example, the edge MQTT can be replaced with EMQX by modifying the DaemonSet configuration.

However, if you upgrade from an older version to the latest one, you need to take compatibility into account: the MQTT originally managed by a static Pod and the new DaemonSet-managed MQTT would conflict on the same port if run together. The compatibility steps are as follows:

  1. On the cloud side, run the following command to add a custom label to all existing edge nodes:
kubectl label nodes --selector=node-role.kubernetes.io/edge without-mqtt-daemonset=""
  2. Modify the node affinity of the MQTT DaemonSet so that it avoids the labeled nodes:
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions:
          - ...
          - key: without-mqtt-daemonset
            operator: DoesNotExist
  3. Switch each node's MQTT from the static Pod to the DaemonSet:
# ------ Edge side ------
# Modify /lib/systemd/system/edgecore.service and set the environment variable DEPLOY_MQTT_CONTAINER to false
# This step can be done before upgrading EdgeCore, so that EdgeCore does not need to be restarted again
sed -i '/DEPLOY_MQTT_CONTAINER=/s/true/false/' /etc/systemd/system/edgecore.service

# Stop EdgeCore
systemctl daemon-reload && systemctl stop edgecore

# Remove the MQTT container; with containerd, replace docker with nerdctl
docker ps -a | grep mqtt-kubeedge | awk '{print $1}' | xargs docker rm -f

# Start EdgeCore
systemctl start edgecore

# ------ Cloud side ------
# Remove the node label so that the DaemonSet-managed MQTT can be scheduled on the node
kubectl label nodes <NODE_NAME> without-mqtt-daemonset-

The keadm join command in the new version hides the with-mqtt parameter and sets its default value to false. If you still want to manage MQTT with a static Pod, you can still set --with-mqtt to enable it. The with-mqtt parameter will be removed in v1.18.

· 4 min read

On Jun 21, 2022 KubeEdge released v1.11, introducing several exciting new features and enhancements that significantly improve node group management, mapper development, installation experience, and overall stability.

v1.11 What's New

Release Highlights

Node Group Management

Users can now deploy applications to several node groups without writing deployment for every group. Node group management helps users to:

  • Manage nodes in groups

  • Spread apps among node groups

  • Run different versions of app instances in different node groups

  • Limit service endpoints in the same location as the client

Two new APIs have been introduced to implement Node Group Management (a minimal NodeGroup example follows the list):

  • NodeGroup API: represents a group of nodes that have the same labels.
  • EdgeApplication API: contains the template of the application organized by node groups, and the information on how to deploy different editions of the application to different node groups.
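
A minimal NodeGroup sketch under these assumptions (the spec fields shown are indicative of the v1alpha1 API; verify them against the linked PRs):

apiVersion: apps.kubeedge.io/v1alpha1
kind: NodeGroup
metadata:
  name: hangzhou
spec:
  # Nodes can be selected by explicit name or by label; both are shown for illustration.
  nodes:
    - edge-node-1
  matchLabels:
    location: hangzhou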

Refer to the links for more details (#3574, #3719).

Mapper SDK

Mapper-sdk is a basic framework written in Go. Based on this framework, developers can more easily implement a new mapper. Mapper-sdk handles the connection to KubeEdge, provides data conversion, manages the basic properties and status of devices, and offers basic capabilities together with an abstract definition of the driver interface. Developers only need to implement the custom protocol driver interface for the corresponding device to get a working mapper.

Refer to the link for more details (#70).

Beta sub-commands in Keadm to GA

Several Keadm sub-commands have moved to GA, bringing containerized deployment, offline installation, and more. The original init and join behaviors have been replaced by the implementations from beta init and beta join:

  • CloudCore will be running in containers and managed by Kubernetes Deployment by default.

  • Keadm now downloads releases that are packed as container images to edge nodes for node setup.

  • init: CloudCore Helm Chart is integrated into init, which can be used to deploy containerized CloudCore.

  • join: Installs edgecore as a system service from a container image, so there is no need to download it from a GitHub release.

  • reset: Reset the node, clean up the resources installed on the node by init or join. It will automatically detect the type of node to clean up.

  • manifest generate: Generate all the manifests to deploy the cloud-side components.

Refer to the link for more details (#3900).

Deprecation of original init and join

The original init and join sub-commands have been deprecated as they had issues with offline installation, etc.

Refer to the link for more details (#3900).

Next-gen Edged to Beta: Suitable for more scenarios

The new version of the lightweight engine Edged, optimized from Kubelet and integrated into edgecore, has moved to Beta. The new Edged will still communicate with the cloud through the reliable transmission tunnel.

Refer to the link for more details (Dev-Branch for beta: feature-new-edged).

Important Steps before Upgrading

If you want to use Keadm to deploy KubeEdge v1.11.0, please note that the behaviors of the init and join sub-commands have been changed.

Other Notable Changes

  • Add custom image repo for keadm join beta (#3654)

  • Keadm: beta join support remote runtime (#3655)

  • Use sync mode to update pod status (#3658)

  • Make log level configurable for local up kubeedge (#3664)

  • Use dependency to pull images (#3671)

  • Move apis and client under kubeedge/cloud/pkg/ to kubeedge/pkg/ (#3683)

  • Add subresource field in application for API with subresource (#3693)

  • Add Keadm beta e2e (#3699)

  • Keadm beta config images: support remote runtime (#3700)

  • Use unified image management (#3720)

  • Use armhf as default for armv7/v6 (#3723)

  • Add ErrStatus in api-server application (#3742)

  • Support compile binaries with kubeedge/build-tools image (#3756)

  • Add min TLS version for stream server (#3764)

  • Adding security policy (#3778)

  • Chart: add cert domain config in helm chart (#3802)

  • Add domain support for certgen.sh (#3808)

  • Remove default KubeConfig for cloudcore (#3836)

  • Helm: Allow annotation of the cloudcore service (#3856)

  • Add rate limiter for edgehub (#3862)

  • Sync pod status immediately when status update (#3891)

· 4 min read

On Mar 7, 2022, KubeEdge released v1.10. The new version introduces several enhancements, significantly improving the installation experience, performance testing, network communication, and Kubernetes version compatibility.

v1.10 What's New

Release Highlights

Installation Experience Improvement with Keadm

Keadm adds new sub-commands to improve the user experience, covering containerized deployment, offline installation, and more. The new sub-commands are beta and config.

beta provides sub-commands that are still in testing but are feature-complete and can be used in advance. Its sub-commands are: beta init, beta manifest generate, beta join, and beta reset.

  • beta init: CloudCore Helm Chart is integrated in beta init, which can be used to deploy containerized CloudCore.

  • beta join: Installs edgecore as a system service from a container image, so there is no need to download it from a GitHub release.

  • beta reset: Reset the node, clean up the resources installed on the node by beta init or beta join. It will automatically detect the type of node to clean up.

  • beta manifest generate: Generate all the manifests to deploy the cloud-side components.

config is used to configure the KubeEdge cluster, covering cluster upgrade, API conversion, and image preloading. Image preloading is already supported through the sub-commands config images list and config images pull.

  • config images list: List all images required for kubeedge installation.

  • config images pull: Pull all images required for kubeedge installation.

Refer to the links for more details. (#3517, #3540, #3554, #3534)

Preview version for Next-gen Edged: Suitable for more scenarios

A new version of the lightweight engine Edged, optimized from kubelet and integrated into edgecore, occupies fewer resources. Users can customize the lightweight optimization according to their needs.

Refer to the links for more details. (Dev-Branch for previewing: feature-new-edged)

Edgemark: Support large-scale KubeEdge cluster performance testing

Edgemark is a performance testing tool derived from Kubemark. Its primary use case is also scalability testing: it allows users to simulate edge clusters that can be much bigger than real ones.

Edgemark consists of two parts: real cloud components and a set of "Hollow" edge nodes. In a "Hollow" edge node, EdgeCore runs in a container, and the edged module runs with an injected mock CRI part that does nothing. As a result, the hollow edge node does not actually start any containers or mount any volumes.

Refer to the link for more details. (#3637)

EdgeMesh proxy tunnel supports quic

Users can choose the quic protocol for edgemesh's proxy tunnel to transmit data. In edge scenarios, nodes are often in weak network environments; compared with the traditional tcp protocol, quic offers better performance and QoS under these conditions.

Refer to the link for more details. (#281)

EdgeMesh supports proxy for udp applications

Some user services use the udp protocol, and edgemesh can now also proxy udp applications.

Refer to the link for more details. (#295)

EdgeMesh Supports SSH Login Between Cloud-Edge/Edge-Edge Nodes

Edge nodes are generally located in private networks, yet it is often necessary to log in to and operate them over SSH. EdgeMesh provides a socks5proxy based on its internal tunnel, which supports forwarding SSH requests from cloud/edge nodes to edge nodes.

Refer to the links for more details. (#258, #242)

Kubernetes Dependencies Upgrade

The vendored Kubernetes version has been upgraded to v1.22.6, so users can now use the features of the new version on both the cloud and the edge side.

Refer to the link for more details. (#3624)

Important Steps before Upgrading

If you want to deploy KubeEdge v1.10.0, please note that the Kubernetes dependency is v1.22.6.

Other Notable Changes

  • Remove dependency on os/exec and curl in favor of net/http (#3409, @mjlshen)

  • Optimize script when create stream cert (#3412, @gujun4990)

  • Cloudhub: prevent dropping volume messages (#3457, @moolen)

  • Modify the log view command after edgecore is running (#3456, @zc2638)

  • Optimize the iptables manager (#3461, @zhu733756)

  • Add script for build release (#3467, @gy95)

  • Using latest codes to do keadm_e2e (#3469, @gy95)

  • Change the resourceType of msg issued by synccontroller (#3496, @Rachel-Shao)

  • Add a basic image for building various components of KubeEdge (#3513, @zc2638)

  • Supporting crossbuild all components (#3515, [@fisher

· 3 min read

The KubeEdge community is thrilled to announce the release of KubeEdge v1.12! This release introduces several exciting new features and enhancements, including alpha implementation of the next-generation Cloud Native Device Management Interface (DMI), a new version of the lightweight Edged engine, high-availability mode for EdgeMesh, edge node upgrades from the cloud, authorization for the Edge Kube-API endpoint, and more.

What's New in KubeEdge v1.12

Alpha Implementation of Next-Gen Cloud Native Device Management Interface (DMI)

DMI makes KubeEdge's IoT device management more pluggable and modular in a cloud-native way, covering Device Lifecycle Management, Device Operation, and Device Data Management.

  • Device Lifecycle Management: Simplifies IoT device lifecycle management, making it as easy as managing a pod.

  • Device Operation: Provides the ability to operate devices through the Kubernetes API.

  • Device Data Management: Separates device data management from device management, allowing data to be consumed by local applications or synchronized to the cloud through a special tunnel.

Next-Gen Edged Graduates to GA: Suitable for More Scenarios

The new version of the lightweight Edged engine, optimized from Kubelet and integrated into EdgeCore, has graduated to General Availability (GA) in this release. The new Edged will continue to communicate with the cloud through a reliable transmission tunnel, making it suitable for a wider range of scenarios.

Introducing High-Availability Mode for EdgeMesh

KubeEdge v1.12 introduces a high-availability mode for EdgeMesh. Unlike the previous centralized relay mode, the EdgeMesh HA mode can set up multiple relay nodes. When some relay nodes fail, other relay nodes can continue to provide relay services, avoiding single points of failure and improving system stability.

Support Edge Node Upgrade from the Cloud

KubeEdge v1.12 introduces the NodeUpgradeJob v1alpha1 API to upgrade edge nodes from the cloud. With this API and its associated controller, users can upgrade selected edge nodes from the cloud and roll back to the original version if the upgrade fails.

Support Authorization for Edge Kube-API Endpoint

Authorization for the Edge Kube-API Endpoint is now available in KubeEdge v1.12. Third-party plugins and applications that depend on Kubernetes APIs on edge nodes must use a bearer token to communicate with the kube-apiserver via the HTTPS server in MetaServer.
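
As a rough example, a pod on an edge node could reach the endpoint as shown below; 127.0.0.1:10550 is the common MetaServer default, so adjust the address to your configuration:

# Read the service account token mounted into the pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Call the Edge Kube-API endpoint served by MetaServer over HTTPS with the bearer token
curl -k -H "Authorization: Bearer ${TOKEN}" \
  https://127.0.0.1:10550/api/v1/namespaces/default/pods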

New GigE Mapper

KubeEdge v1.12 includes a new GigE Device Mapper with a Golang implementation, which is used to access GigE Vision protocol cameras.

Important Steps Before Upgrading

  • If you want to upgrade KubeEdge to v1.12, the configuration file in EdgeCore has been upgraded to v1alpha2. You must modify your configuration file for Edged in EdgeCore to adapt to the new Edged.

  • If you want to use authorization for the Edge Kube-API Endpoint, please enable the RequireAuthorization feature through the feature gate in both CloudCore and EdgeCore. If the RequireAuthorization feature is enabled, MetaServer will only serve HTTPS requests.

  • If you want to upgrade EdgeMesh to v1.12, you do not need to deploy the existing EdgeMesh-server, but you need to configure relayNodes (an indicative configuration fragment follows this list).

  • If you want to run EdgeMesh v1.12 on KubeEdge v1.12 and use HTTPS requests to communicate with KubeEdge, you must set kubeAPIConfig.metaServer.security.enable=true.
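
An indicative EdgeMesh configuration fragment covering these two points is shown below; the field layout follows the EdgeMesh v1.12 Helm values as commonly documented and should be verified against the EdgeMesh release notes:

# Helm values fragment (indicative): relay nodes for the EdgeMesh HA mode
relayNodes:
  - nodeName: k8s-master          # node chosen as a relay (name is illustrative)
    advertiseAddress:
      - 1.1.1.1                   # reachable address of the relay node (illustrative)

# edgemesh-agent config fragment (indicative): talk to MetaServer over HTTPS
kubeAPIConfig:
  metaServer:
    security:
      enable: true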

KubeEdge v1.12 brings exciting new features and improvements to the edge computing ecosystem. We invite you to explore the release and provide feedback to the community. Happy edge computing!