For most users this is an edge-oriented project, and routine monitoring tasks do not demand especially high training accuracy. To train quickly, we pick the YOLOv8n model mentioned in the official GitHub repository.
pip install ultralytics
The official demo seems to raise a minor error for PyCharm users, while the web-based Jupyter Notebook does not; it is most likely a multiprocessing issue.
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")    # load a pretrained model (recommended for training)

# Use the model
results = model.train(data="coco128.yaml", epochs=3)  # train the model
from ultralytics import YOLO

def main():
    # Load a model
    model = YOLO("yolov8n.yaml")  # build a new model from scratch
    model = YOLO("yolov8n.pt")    # load a pretrained model (recommended for training)
    # model.train(data="coco128.yaml", epochs=5)

if __name__ == '__main__':
    main()
from ultralytics import YOLO

def main():
    # Load a model
    model = YOLO("yolov8n.yaml")  # build a new model from scratch
    model = YOLO("yolov8n.pt")    # load a pretrained model (recommended for training)
    # model.train(data="coco128.yaml", epochs=5)
    results = model.val()  # evaluate model performance on the validation set

if __name__ == '__main__':
    main()
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs\detect\train9
Starting training for 5 epochs...

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
        1/5      5.85G      1.213      1.429      1.258        215        640: 100%|██████████| 8/8 [00:05<00:00, 1.58it/s]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 4/4 [00:36<00:00, 9.12s/it]
                   all        128        929      0.668       0.54      0.624      0.461

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
        2/5      6.87G      1.156      1.327      1.243        163        640: 100%|██████████| 8/8 [00:04<00:00, 1.64it/s]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 4/4 [00:35<00:00, 8.91s/it]
                   all        128        929      0.667      0.589      0.651      0.487

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
...
results:
[{'_keys': <... at 0x0000023FA2FC0350>,
  'boxes': ultralytics.yolo.engine.results.Boxes
    type:  torch.Tensor
    shape: torch.Size([6, 6])
    dtype: torch.float32
    tensor([[2.40000e+01, 2.26000e+02, 8.02000e+02, 7.58000e+02, 8.75480e-01, 5.00000e+00],
            [4.80000e+01, 3.97000e+02, 2.46000e+02, 9.06000e+02, 8.74487e-01, 0.00000e+00],
            [6.70000e+02, 3.79000e+02, 8.10000e+02, 8.77000e+02, 8.53311e-01, 0.00000e+00],
            [2.19000e+02, 4.06000e+02, 3.44000e+02, 8.59000e+02, 8.16101e-01, 0.00000e+00],
            [0.00000e+00, 2.54000e+02, 3.20000e+01, 3.25000e+02, 4.91605e-01, 1.10000e+01],
            [0.00000e+00, 5.50000e+02, 6.40000e+01, 8.76000e+02, 3.76493e-01, 0.00000e+00]], device='cuda:0'),
  'masks': None,
  'names': {0: 'person', 1: 'bicycle', ..., 79: 'toothbrush'},
  'orig_img': array([[[122, 148, 172],
                      [120, 146, 170],
                      [125, 153, 177],
                      ...,
                      [ 99,  89,  95],
                      [ 96,  86,  92],
                      [102,  92,  98]]], dtype=uint8),
  'orig_shape': (1080, 810),
  'path': '..\\bus.jpg',
  'probs': None,
  'speed': {'inference': 30.916452407836914, 'postprocess': 2.992391586303711, 'preprocess': 3.988981246948242}}]
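Each row of the boxes tensor above follows the layout [x1, y1, x2, y2, confidence, class_id] in pixel coordinates. A minimal sketch of unpacking one row with NumPy; the sample values are copied from the first detection above, and the names dict here is only the relevant subset of the COCO class map shown in the output:

```python
import numpy as np

# One detection row copied from the tensor above:
# [x1, y1, x2, y2, confidence, class_id]
row = np.array([24.0, 226.0, 802.0, 758.0, 0.87548, 5.0])

x1, y1, x2, y2, conf, cls_id = row
names = {0: 'person', 5: 'bus'}  # subset of the COCO names dict in the output

print(f"label={names[int(cls_id)]} conf={conf:.2f} "
      f"box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")
# → label=bus conf=0.88 box=(24,226,802,758)
```

Class id 5 is 'bus', which matches the large box spanning most of bus.jpg; the four rows with class id 0 are the detected persons.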
Summary:
Error:
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:

    if __name__ == '__main__':
        freeze_support()
        ...

The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
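The fix the message points at is the standard guard for Windows, where child processes are started with spawn rather than fork. A minimal sketch of the idiom (the square worker here is just an illustration, not part of the YOLO code):

```python
from multiprocessing import Pool, freeze_support

def square(x):
    # Trivial worker function used only to illustrate the guard
    return x * x

def main():
    # Without the __main__ guard below, spawn-mode workers would
    # re-execute this module on import and recurse endlessly.
    with Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # → [1, 4, 9]

if __name__ == '__main__':
    freeze_support()  # needed only when freezing to an executable
    main()
```

This is exactly why the corrected training script above wraps model creation and model.train() inside main() under the __main__ guard: the dataloader workers are spawned child processes.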
【1】YOLOv8 official GitHub repository
【2】https://stackoverflow.com/questions/74035760/opencv-waitkey-throws-assertion-rebuild-the-library-with-windows-gtk-2-x-or