Table of Contents
1. Preface
1.1 Applicable Scenarios
1.2 Libraries Involved
2. Function Libraries
2.1 pyautogui - Screenshots & Mouse Operations
2.1.1 The screenshot Function
2.1.2 Mouse Movement and Clicking
2.2 OpenCV - Template Matching
2.2.1 The matchTemplate Function
2.2.2 The minMaxLoc Function
2.2.3 Sample Code
2.3 base64 - Converting Images to Base64
2.3.1 Online Conversion Site
2.3.2 Conversion Code
3. Example
3.1 Requirements
3.2 Solution Approach
3.3 Code
1. Preface
1.1 Applicable Scenarios
Scenarios where an action should only be performed after a given piece of software has been detected to be in a specific state.
1.2 Libraries Involved
Automatic screen-state detection: OpenCV -> template matching
Automatic mouse control: pyautogui -> screenshots, mouse movement, clicks, etc.
Image to Base64: base64 -> avoids having to ship local image files when packaging the program
2. Function Libraries
2.1 pyautogui - Screenshots & Mouse Operations
2.1.1 The screenshot Function
This function captures the screen so that its state can be monitored; by default it captures the full screen.
img = pyautogui.screenshot()
img.save("test.jpg")
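If only part of the screen is of interest, a region can be captured instead of the full screen. The snippet below is a minimal sketch; the coordinates and file name are placeholders rather than values from the original workflow.
# Capture a 400x300 region whose top-left corner is at (100, 200)
region_img = pyautogui.screenshot(region=(100, 200, 400, 300))
region_img.save("region_test.png")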
2.1.2 Mouse Movement and Clicking
The mouse can be moved and clicked automatically once the relevant event is detected.
① pyautogui.moveTo(x, y, duration=0.0): moves the mouse to the coordinates (x, y). The optional duration parameter sets how long the move takes; a non-zero value produces a smooth movement.
② pyautogui.click(x=None, y=None, clicks=1, interval=0.0, button='left'): simulates a mouse click. Passing coordinates (x, y) clicks a specific position; clicks sets the number of clicks, interval the pause between consecutive clicks, and button the mouse button, which defaults to the left button (a short parameter example follows the code below).
pyautogui.moveTo(1500,100, duration=0.5)
pyautogui.click()
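As a small illustration of the click parameters listed above (the coordinates here are arbitrary placeholders, not positions from the original workflow), a double click with a short pause between clicks could look like this:
# Double-click at a hypothetical position, with 0.25 s between the two clicks
pyautogui.click(x=800, y=450, clicks=2, interval=0.25, button='left')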
2.2 OpenCV - Template Matching
2.2.1 The matchTemplate Function
matchTemplate(image, templ, method) slides the template over the source image, computes a similarity score at every position, and returns a single-channel result map.
- image: the source image to search in; 8-bit integer or 32-bit floating-point, single- or multi-channel;
- templ: the template image; same type as the source image, and its size must not exceed the source image;
- method: the matching method.
Reference: OpenCV-Python快速入门(十四):模板匹配_python 模板匹配-CSDN博客
2.2.2 The minMaxLoc Function
The minMaxLoc() function returns the minimum and maximum element values in an image, together with the locations of those values.
- src: input image; must be single-channel;
- mask: optional mask;
- minVal, maxVal, minLoc, maxLoc: the minimum value, the maximum value, the location of the minimum, and the location of the maximum, respectively.
2.2.3 Sample Code
# Template matching: output the best-match location
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
templateImg = cv2.cvtColor(templateImg, cv2.COLOR_BGR2RGB)
res = cv2.matchTemplate(img,templateImg,cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
left_top = max_loc
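In practice it is usually worth checking max_val against a confidence threshold before trusting max_loc; with TM_CCOEFF_NORMED the score lies in [-1, 1], and the 0.8 cutoff below is an assumed value, not something fixed by OpenCV:
# Only trust the location if the normalized score is high enough (0.8 is an assumed threshold)
threshold = 0.8
if max_val >= threshold:
    print("template found at", max_loc)
else:
    print("no confident match")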
2.3 base64 - Converting Images to Base64
2.3.1 Online Conversion Site
Online conversion site: 图片在线转换Base64 | 图片编码base64 (sojson.com)
The conversion process on the site is straightforward: upload the image and copy the generated Base64 string.
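If you prefer not to rely on the online site, the same conversion can be done locally with the standard base64 module. This is a minimal sketch; the file name template.png is just a placeholder.
import base64
# Read a local image (placeholder name) and build a data-URI style Base64 string
with open("template.png", "rb") as f:
    b64_str = "data:image/png;base64," + base64.b64encode(f.read()).decode("ascii")
print(b64_str[:60], "...")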
2.3.2 Conversion Code
Converting a Base64 string into an OpenCV image:
def readb64(uri):
    # Strip the "data:image/...;base64," prefix and decode the remaining payload
    encoded_data = uri.split(',')[1]
    nparr = np.frombuffer(base64.b64decode(encoded_data), np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    return img
# Template image as a Base64 string
templateImg = readb64("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAcCAYAAABh2p9gAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAAEnQAABJ0Ad5mH3gAAASZSURBVEhLzVVdaBxVFB5tUmJqiw0twagEGkWtfZBWQS2KbxYliOCLj4q+iVh8EXwRURRU/IForSBCrJFqohYjpFofqpQWVmi7QRPbpuZndjc7uzt/O7uzM3fu53fuTETExz54lpM7c+bc73z3O/feWBUvRJUuY8Xn8z+85YYInRBBLYRH9+189BwfXstBy3Ow7tuohzZa0RqidBWWQmGanhWe0vkh5hDQe3R5R0KXoAQkpxicpkLQ6Jp3yyD9VzKBJewXYWMbeUVBzWdXKooJERoBaVHhIV0AOXbDmKU0MqWgQwYyIsQdfidKNyeRcZD0duFiFjSh5U0AOaZt4ZOhy8UGqWdAczpMSAWYhVQKnTJOpkJMiMt0MctEpIxL5xenW4WHBtZwkaB5moDGILtCl65uM1UoFiFB/HvJokeLTlCJV9DEZSxjWn2N/aUHMHx6FLuW7sDulX3Ys3gnnnKexnGcYMYKC5KxzJe6BUXLVJAGsYJP1AXCTeNbPOY+ii3nB7EV25i0iT4AK7BwXfUGvIAXcYq/BuychWgvTjNLVmmGsBehxoR5lHDX+Vsx8tsAxvR29PmWAZJfH7ZgyB3BLeXdmAqnmL+Wo2zsDhq73EGaKnQ7EYt4OOx+jJvs67HT6cfWyMKgtrAZ22HFfbiK6ZvjTRi+MISDK89gXp/h4kiRq9QqF9HKqFqmKHso3QRu++VuXBMNYad7LfrI7Gqb7u+CVd0BS9jSh2qDGDndj1n1GepFqzs9EZOAPTbBc2X75vp+lE7gB3yH4dUbMYkjmE6m8ZWao65z+D6bwZvJy3g8HMeX6ghW2ZqcBkmKljQC9qDIsM3GLOASt+Iq9k6N4cFz4ziHJZaLWEhhsTJvJpT8UxidGcOsf9L0021cQNaNkWxsG9GyyT8eN7JLkTU1vX9iv/lYYyzb2H5Cv2EOKj5dmsG79aMkUONaWUi1zUYRo4bchpxgo0o+ZRyYOGA+BGTWJvsuM1OCZgLI554fs7DGE6WDeCs6zOAf9Cr8LD/UPHqcoGIybOKR2XG88+sHhOJMaqJ7iqAtCt/GOqMOGXW0oAOHwi/Q/9M2Zv5JmCX46cUc0PSGgN94R7Hj5O1UcZFYrCZatBXntvjW5C/gtsoLZfxWIfyeE3vxyuXXqUSdeXJ2CWhuiWQFb6y9hpfaE1SxzEAHSUtuFnkUUTxCMVMeJUbQgDC/8xgMTN7MYhLOLzlL9CgFZ3DP5/vwvvs2Pwr10IgvE82FmOR7TMDkcsqnSreAkeP34eHlZ/kmlQhY5+T3nEl8Uj5EsDLf1ukdeoQoZpJ0WBCIKYPgG6i6tLzO8gFGj91L/p5EYTW5/ofmnsSP1WN8bcNOmkaNlvINgJGkIGhzzTa55wwl2KTmy3j+5+fw4dlXTdSKmHKJXXJlk0UiEtM8zdVqxjr55AKwyRzPcKQgPBoJr+z1bJmLaFL7sybHSswNuGASdaGZYpMzTgp1sV3Nxs7MRCX85VgwJpe2JmiWOZzGq4zG24ZHjzpkpgM0GUxH/m0SFOFJVx4FTMLGEv7yJuX/pK6EFehXDFDrHPH/Dgj8BbLsDrYrBlhtAAAAAElFTkSuQmCC")
cv2.imshow('imageShow', templateImg)
cv2.waitKey(0)
cv2.destroyAllWindows()
Running the snippet displays the decoded template image in an OpenCV window.
3. Example
3.1 Requirements
Manual workflow: after a sample has been measured, the radiation shield cover is opened and the signal light turns from green to red; the sample is replaced and the cover is closed, at which point the light turns from red back to green; the mouse then has to be moved to the corresponding position in the software, clicked, and the parameters entered to start the next run.
① Recognize the state of the signal light in the software window (green means the radiation shield cover is closed, red means it is open).
② Take screenshots of the screen at regular intervals and determine the pixel coordinates in the screenshot that the mouse needs to move to (see the small sketch below).
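One convenient way to find those pixel coordinates (a helper sketch, not part of the original write-up) is to hover the mouse over the target control and print its position:
# Print the current mouse position once per second; hover over the target and note the values
import time
import pyautogui
while True:
    print(pyautogui.position())
    time.sleep(1)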
3.2 Solution Approach
Since the follow-up actions only take place after the signal has changed from red back to green, it is enough to run template matching on the periodically captured screenshots and check whether the current state is green; if it is, the corresponding mouse actions are executed.
The overall structure is two nested infinite while loops; based on the template-matching result, a break is issued at the appropriate point to exit the inner loop.
3.3 Code
# Switch away from the target window while the state is red
# Run the program while the state is red
import base64
import cv2
import numpy as np
import time
import pyautogui

def readb64(uri):
    # Decode a "data:image/...;base64,..." string into an OpenCV BGR image
    encoded_data = uri.split(',')[1]
    nparr = np.frombuffer(base64.b64decode(encoded_data), np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    return img

# Template image as a Base64 string
templateImg = readb64("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAcCAYAAABh2p9gAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAAEnQAABJ0Ad5mH3gAAASZSURBVEhLzVVdaBxVFB5tUmJqiw0twagEGkWtfZBWQS2KbxYliOCLj4q+iVh8EXwRURRU/IForSBCrJFqohYjpFofqpQWVmi7QRPbpuZndjc7uzt/O7uzM3fu53fuTETExz54lpM7c+bc73z3O/feWBUvRJUuY8Xn8z+85YYInRBBLYRH9+189BwfXstBy3Ow7tuohzZa0RqidBWWQmGanhWe0vkh5hDQe3R5R0KXoAQkpxicpkLQ6Jp3yyD9VzKBJewXYWMbeUVBzWdXKooJERoBaVHhIV0AOXbDmKU0MqWgQwYyIsQdfidKNyeRcZD0duFiFjSh5U0AOaZt4ZOhy8UGqWdAczpMSAWYhVQKnTJOpkJMiMt0MctEpIxL5xenW4WHBtZwkaB5moDGILtCl65uM1UoFiFB/HvJokeLTlCJV9DEZSxjWn2N/aUHMHx6FLuW7sDulX3Ys3gnnnKexnGcYMYKC5KxzJe6BUXLVJAGsYJP1AXCTeNbPOY+ii3nB7EV25i0iT4AK7BwXfUGvIAXcYq/BuychWgvTjNLVmmGsBehxoR5lHDX+Vsx8tsAxvR29PmWAZJfH7ZgyB3BLeXdmAqnmL+Wo2zsDhq73EGaKnQ7EYt4OOx+jJvs67HT6cfWyMKgtrAZ22HFfbiK6ZvjTRi+MISDK89gXp/h4kiRq9QqF9HKqFqmKHso3QRu++VuXBMNYad7LfrI7Gqb7u+CVd0BS9jSh2qDGDndj1n1GepFqzs9EZOAPTbBc2X75vp+lE7gB3yH4dUbMYkjmE6m8ZWao65z+D6bwZvJy3g8HMeX6ghW2ZqcBkmKljQC9qDIsM3GLOASt+Iq9k6N4cFz4ziHJZaLWEhhsTJvJpT8UxidGcOsf9L0021cQNaNkWxsG9GyyT8eN7JLkTU1vX9iv/lYYyzb2H5Cv2EOKj5dmsG79aMkUONaWUi1zUYRo4bchpxgo0o+ZRyYOGA+BGTWJvsuM1OCZgLI554fs7DGE6WDeCs6zOAf9Cr8LD/UPHqcoGIybOKR2XG88+sHhOJMaqJ7iqAtCt/GOqMOGXW0oAOHwi/Q/9M2Zv5JmCX46cUc0PSGgN94R7Hj5O1UcZFYrCZatBXntvjW5C/gtsoLZfxWIfyeE3vxyuXXqUSdeXJ2CWhuiWQFb6y9hpfaE1SxzEAHSUtuFnkUUTxCMVMeJUbQgDC/8xgMTN7MYhLOLzlL9CgFZ3DP5/vwvvs2Pwr10IgvE82FmOR7TMDkcsqnSreAkeP34eHlZ/kmlQhY5+T3nEl8Uj5EsDLf1ukdeoQoZpJ0WBCIKYPgG6i6tLzO8gFGj91L/p5EYTW5/ofmnsSP1WN8bcNOmkaNlvINgJGkIGhzzTa55wwl2KTmy3j+5+fw4dlXTdSKmHKJXXJlk0UiEtM8zdVqxjr55AKwyRzPcKQgPBoJr+z1bJmLaFL7sybHSswNuGASdaGZYpMzTgp1sV3Nxs7MRCX85VgwJpe2JmiWOZzGq4zG24ZHjzpkpgM0GUxH/m0SFOFJVx4FTMLGEv7yJuX/pK6EFehXDFDrHPH/Dgj8BbLsDrYrBlhtAAAAAElFTkSuQmCC")
# Convert the template to RGB once, outside the loops, so its channel order stays consistent
templateImg = cv2.cvtColor(templateImg, cv2.COLOR_BGR2RGB)

# Run the program while the state is red
while True:
    # Capture the screen
    img = pyautogui.screenshot()
    img = cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2RGBA)
    # Template matching: output the best-match location
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    res = cv2.matchTemplate(img, templateImg, cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    left_top = max_loc
    print(left_top)
    # time.sleep(2)
    # h, w = templateImg.shape[:2]
    # top_left = max_loc
    # bottom_right = (top_left[0] + w, top_left[1] + h)
    # cv2.rectangle(img, top_left, bottom_right, (255, 0, 0), 5)
    # img = cv2.resize(img, (0, 0), fx=0.8, fy=0.8)
    # cv2.imshow('11111', img)
    # cv2.waitKey(0)
    # cv2.destroyAllWindows()
    # print(left_top[0] > 1600, left_top[0] < 1620, left_top[1] < 300)
    # Green state detected: the template matched near the expected screen position
    if left_top[0] > 1600 and left_top[0] < 1620 and left_top[1] < 300:
        print("green state")
        pyautogui.moveTo(1500, 100, duration=0.5)
        pyautogui.click()
        pyautogui.moveTo(1740, 405, duration=0.5)
        pyautogui.click()
        time.sleep(3)
        while True:
            # Capture the screen
            img = pyautogui.screenshot()
            img = cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2RGBA)
            # Template matching: output the best-match location
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            res = cv2.matchTemplate(img, templateImg, cv2.TM_CCOEFF_NORMED)
            min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
            left_top = max_loc
            if left_top[0] > 1600 and left_top[0] < 1620 and left_top[1] < 300:
                pass  # still green
                # print("green state again")
            else:  # red state detected again, leave the inner loop
                # print("green -> red")
                break