【OpenCV】ffmpeg recording + OpenCV green-screen detection script


【Purpose】

While testing a peripheral, the screen occasionally flashes green after long playback. The idea is to leave an automated check running overnight and capture the timestamp plus a screenshot of each green-screen occurrence.

【Implementation】

Use ffmpeg to record the screen, slice the recording into frames, and analyze the frames with OpenCV.
Because the capture commands differ between macOS and Windows, there are two separate scripts.
ffmpeg/ffplay and the OpenCV Python bindings (plus NumPy) must be installed.

【Code】

macOS script

import os
from datetime import datetime

import cv2
import numpy as np
import json
import threading
import time
from multiprocessing import Process

pre_cmd1 = "mkdir recordingToolTmp"
os.popen(pre_cmd1).read()

pre_cmd1 = "mkdir recordingToolPicTmp"
os.popen(pre_cmd1).read()

class Job(threading.Thread):
    def __init__(self, ss_num, between_time, *args, **kwargs):
        super(Job, self).__init__(*args, **kwargs)
        self.__running = threading.Event()  # flag used to stop the thread
        self.__running.set()  # True: keep running
        self.ss_num = ss_num
        self.between_time = between_time


    def run(self):
        while True:
            # avfoundation device index "3:0"; list available devices with check_device()
            start_cmd = 'ffmpeg -f avfoundation -i "3:0" ./recordingToolTmp/Screen' + str(int(self.ss_num / self.between_time)) + '.ts'
            if self.__running.is_set():
                print("recording thread running... ", time.time())
                os.popen(start_cmd).read()
                # maximum time to wait before the recording process is killed
                time.sleep(self.between_time * 2 + 1)
            else:
                print("recording thread exiting...")
                return

class Coo():
    def __init__(self):
        self.tmp_thread = None

    # start the recording thread
    def execute(self, ss_num, between_time):
        t = Job(ss_num, between_time)
        t.daemon = True
        t.start()
        self.tmp_thread = t
        t.join()

class CustErr(Exception):
    pass

def main(ss_num,between_time):
    a = Coo()
    a.execute(ss_num,between_time)

class recordingTool():
    file_name = None
    f_file = None

    def check_device(self):
        check_cmd = 'ffmpeg -f avfoundation -list_devices true -i ""'
        os.popen(check_cmd).read()

    def test_length(self,url):
        info_cmd = "ffprobe -v quiet -print_format json -show_format -show_streams " + url
        data_json = os.popen(info_cmd).read()
        d = json.loads(data_json)
        duration = d["format"]["duration"]
        # file.write("\n" + "video length:" + str(duration) + "\n")
        word = "\n" + "video length:" + str(duration) + "\n"
        self.writeWordByHour(word)
        return duration

    def video_to_pic(self, url, i):
        pic_dir_cmd = "mkdir recordingToolPicTmp/" + str(i)
        os.popen(pic_dir_cmd).read()
        # extract 5 frames per second, scaled to 1280x720
        cmd = "ffmpeg -i " + url + " -r 5 -s 1280x720 -ss 00:00:00 ./recordingToolPicTmp/" + str(i) + "/%d.png"
        os.popen(cmd).read()

    def choose_color(self, color):
        # HSV ranges (OpenCV convention: H 0-180, S/V 0-255)
        if color == "white":
            lower_orange = [0, 0, 221]
            upper_orange = [180, 30, 255]
        elif color == "gray":
            lower_orange = [0, 0, 100]
            upper_orange = [180, 43, 220]
        elif color == "green":
            # lower_orange = [35, 43, 46]
            # upper_orange = [77, 255, 255]
            lower_orange = [30, 65, 65]
            upper_orange = [80, 255, 255]
        elif color == "blue":
            lower_orange = [100, 43, 46]
            upper_orange = [124, 255, 255]
        elif color == "black":
            lower_orange = [0, 0, 0]
            upper_orange = [180, 255, 46]
        else:
            raise ValueError("unsupported color: " + color)

        return lower_orange, upper_orange

    def writeWordByHour(self,word):
        if self.f_file is None:
            self.file_name = datetime.now().strftime("%Y-%m-%d-%H") + ".txt"
            make_file_cmd = "touch " + self.file_name
            os.popen(make_file_cmd).read()
            self.f_file = open('./' + self.file_name, 'w', encoding='utf-8')
        else:
            new_file_name = datetime.now().strftime("%Y-%m-%d-%H") + ".txt"
            if new_file_name != self.file_name:
                self.f_file.close()
                self.file_name = new_file_name
                make_file_cmd = "touch " + self.file_name
                os.popen(make_file_cmd).read()
                self.f_file = open('./' + self.file_name, 'w', encoding='utf-8')
        self.f_file.write(word)

    def closeFile(self):
        self.f_file.close()

    def test_video_opencv(self,url,i,color):
        start_time = time.time()

        self.video_to_pic(url,i)
        pic_len = len(os.listdir("./recordingToolPicTmp/" + str(i) + "/"))
        gray_num = 0
        gray_index = []
        for png_num in range(1, pic_len + 1):
            img = cv2.imread("./recordingToolPicTmp/" + str(i) + "/" + str(png_num) + ".png")
            # gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale
            # ret, img = cv2.threshold(gray, 160, 255, cv2.THRESH_BINARY)  # binarize the grayscale image
            # img = 255 - img

            lower_orange_array,upper_orange_array = self.choose_color(color)
            lower_orange = np.array(lower_orange_array)
            upper_orange = np.array(upper_orange_array)

            hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

            mask = cv2.inRange(hsv, lower_orange, upper_orange)
            # cv2.imshow('image', mask)
            # cv2.waitKey(0)

            binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
            binary = cv2.dilate(binary, None, iterations=2)

            if int(cv2.__version__[0]) > 2:
                contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            else:
                _, contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

            # sum the contour areas
            pic_sum = 0
            space = img.shape[0] * img.shape[1]
            for cts in contours:
                pic_sum += cv2.contourArea(cts)
            if color == "white" and pic_sum / space > 0.95:
                gray_num += 1
                gray_index.append(pic_sum)
                # file.write(color + " screen : ./recordingToolPicTmp/" + str(i) + "/" + str(png_num) + ".png" + "\n")
            if color == "green" and pic_sum / space >= 0.01:
                gray_num += 1
                gray_index.append(pic_sum)
                word = color + " screen : ./recordingToolPicTmp/" + str(i) + "/" + str(png_num) + ".png ," + str(datetime.now().strftime('%Y-%m-%d %H:%M:%S')) + "\n"
                mkdir_cmd = "mkdir ./tmp/" + str(i)
                os.popen(mkdir_cmd).read()
                cp_cmd = "cp ./recordingToolPicTmp/" + str(i) + "/" + str(png_num) + ".png ./tmp/" + str(i) + "/"
                os.popen(cp_cmd).read()
                cp_cmd = "cp ./recordingToolTmp/Screen" + str(i) + ".ts ./tmp"
                os.popen(cp_cmd).read()

                self.writeWordByHour(str(pic_sum/space))
                self.writeWordByHour(word)
            if color not in ("white", "green") and pic_sum / space > 0.95:
                gray_num += 1
                gray_index.append(pic_sum)
                # file.write(color + " screen: ./recordingToolPicTmp/" + str(i) + "/" + str(png_num) + ".png" + "\n")

        end_time = time.time()
        print(f"test_video_opencv time consumption: {str(end_time - start_time)} seconds")
        return gray_num




if __name__ == '__main__':
    r = recordingTool()
    rm_cmd = "rm -rf ./recordingToolTmp/*"
    os.popen(rm_cmd).read()
    rm_cmd = "rm -rf ./recordingToolPicTmp/*"
    os.popen(rm_cmd).read()
    rm_cmd = "rm -rf ./tmp/*"
    os.popen(rm_cmd).read()


    between_time = 10
    num = 32

    for i in range(num):
        print(i)
        url = "./recordingToolTmp/Screen" + str(int(i / between_time - 1)) + ".ts"
        if i == 0:
            start_p = Process(target=main, args=(0, between_time))
            start_p.start()
        elif i % between_time == 0:
            read_cmd = "ps -ef | grep ffmpeg"
            process_info = os.popen(read_cmd).read()
            process_infos = process_info.split("\n")
            for info in process_infos:
                if info.find("Screen") > -1:
                    cols = info.split()  # split() handles the variable spacing in ps output
                    del_cmd = "kill -9 " + cols[1] + " " + cols[2]
                    os.popen(del_cmd).read()
            r.test_length(url)
            r.test_video_opencv(url, str(int(i / between_time - 1)), "green")
            rm_cmd = "rm -rf ./recordingToolPicTmp/" + str(int(i / between_time - 1)) + "/*"
            os.popen(rm_cmd).read()
            rm_cmd = "rm -rf ./recordingToolTmp/Screen" + str(int(i / between_time - 1)) + ".ts"
            os.popen(rm_cmd).read()


            start_p = Process(target=main, args=(i, between_time))
            start_p.start()
        if i == num - 1:
            r.writeWordByHour("last second1")
            r.closeFile()
            time.sleep(1)
            read_cmd = "ps -ef | grep ffmpeg | awk '{print $2,$3}'"
            process_info = os.popen(read_cmd).read()
            del_cmd = "kill -9 " + process_info.replace("\n", " ")
            os.popen(del_cmd).read()
        time.sleep(1)
    print("main thread finished, exiting")
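test_length shells out to ffprobe and parses its JSON output. The parsing step can be sketched in isolation against a hand-written stand-in for ffprobe's `-print_format json -show_format` output (the field values below are made up for illustration):

```python
import json

# Trimmed, hand-written stand-in for `ffprobe -print_format json -show_format` output.
sample_ffprobe_json = """
{
  "format": {
    "filename": "Screen0.ts",
    "duration": "10.033333",
    "size": "1048576"
  }
}
"""

def parse_duration(ffprobe_json):
    """Extract the container duration (seconds, as float) from ffprobe JSON output."""
    d = json.loads(ffprobe_json)
    return float(d["format"]["duration"])

print(parse_duration(sample_ffprobe_json))  # 10.033333
```

Note that ffprobe reports duration as a string, so converting to float before comparing or logging avoids surprises.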

Windows script

# -*- coding: utf-8 -*-
import os
import subprocess
import time
from datetime import datetime

import cv2
import numpy as np
import json
import threading
from time import sleep

print(os.getcwd())  # "pwd" is not a Windows command; use os.getcwd() instead
del_cmd1 = "rmdir /s /q .\\recordingToolPicTmp"
os.popen(del_cmd1).read()
del_cmd2 = "rmdir /s /q .\\recordingToolTmp"
os.popen(del_cmd2).read()
del_cmd = "rmdir /s /q .\\tmp"
os.popen(del_cmd).read()

class Job(threading.Thread):
    def __init__(self, ss_num, between_time, *args, **kwargs):
        super(Job, self).__init__(*args, **kwargs)
        self.__flag = threading.Event()  # flag used to pause the thread
        self.__flag.set()  # True: not paused
        self.__running = threading.Event()  # flag used to stop the thread
        self.__running.set()  # True: keep running
        self.ss_num = ss_num
        self.between_time = between_time
        self.task = None

    def run(self):
        while self.__running.is_set():
            start_cmd = 'ffmpeg -f dshow -i video="screen-capture-recorder" ./recordingToolTmp/Screen' + str(int(self.ss_num / self.between_time)) + '.mp4'
            print("recording thread running... ", time.time())
            self.task = subprocess.Popen(start_cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
            print(start_cmd)
            # maximum time to wait before the recording process is stopped
            sleep(self.between_time * 2 + 1)
            self.__flag.wait()  # returns immediately when the flag is True, blocks until set() otherwise

    def pause(self):
        print("thread paused")
        self.__flag.clear()  # set to False so run() blocks

    def resume(self):
        print("thread resumed")
        self.__flag.set()  # set to True so run() continues

    def exit(self):
        print("thread stopping")
        self.task.stdin.write(b'q')  # ask ffmpeg to finish the current file cleanly
        self.task.communicate()
        self.task.kill()
        self.__flag.set()  # resume the thread first, in case it was paused
        self.__running.clear()  # set to False to end the loop

class Coo():
    def __init__(self):
        self.cur_thread = None

    # start the recording thread
    def execute(self, ss_num, between_time):
        self.cur_thread = Job(ss_num, between_time)
        self.cur_thread.daemon = False  # False: the interpreter waits for the thread instead of killing it on exit
        self.cur_thread.start()
        # self.cur_thread.join()  # would block the main process until the thread finishes

    # stop the recording thread
    def exit(self):
        self.cur_thread.exit()

class recordingTool():
    file_name = None
    f_file = None

    def check_device(self):
        check_cmd = 'ffmpeg -list_devices true -f dshow -i dummy'
        os.popen(check_cmd).read()
        print("CHECKOUT")

    def test_length(self,url):
        info_cmd = "ffprobe -v quiet -print_format json -show_format -show_streams " + url
        data_json = os.popen(info_cmd).read()
        d = json.loads(data_json)
        duration = d["format"]["duration"]
        # file.write("\n" + "video length:" + str(duration) + "\n")
        word = "\n" + "video length:" + str(duration) + "\n"
        self.writeWordByHour(word)
        return duration

    def video_to_pic(self, url, i):
        pic_dir_cmd = "mkdir recordingToolPicTmp\\" + str(i)
        os.popen(pic_dir_cmd).read()
        # extract 1 frame per second, scaled to 1280x720
        cmd = "ffmpeg -i " + url + " -r 1 -s 1280x720 -ss 00:00:00 ./recordingToolPicTmp/" + str(i) + "/%d.png"
        os.popen(cmd).read()

    def choose_color(self, color):
        # HSV ranges (OpenCV convention: H 0-180, S/V 0-255)
        if color == "white":
            lower_orange = [0, 0, 221]
            upper_orange = [180, 30, 255]
        elif color == "gray":
            lower_orange = [0, 0, 100]
            upper_orange = [180, 43, 220]
        elif color == "green":
            # lower_orange = [35, 43, 46]
            # upper_orange = [77, 255, 255]
            lower_orange = [30, 65, 65]
            upper_orange = [80, 255, 255]
        elif color == "blue":
            lower_orange = [100, 43, 46]
            upper_orange = [124, 255, 255]
        elif color == "black":
            lower_orange = [0, 0, 0]
            upper_orange = [180, 255, 46]
        else:
            raise ValueError("unsupported color: " + color)

        return lower_orange, upper_orange

    def writeWordByHour(self,word):
        if self.f_file is None:
            self.file_name = datetime.now().strftime("%Y-%m-%d-%H-%M") + ".txt"
            make_file_cmd = "type nul> " + self.file_name
            os.popen(make_file_cmd).read()
            self.f_file = open('./' + self.file_name, 'w', encoding='utf-8')
        else:
            new_file_name = datetime.now().strftime("%Y-%m-%d-%H-%M") + ".txt"
            if new_file_name != self.file_name:
                self.f_file.close()
                self.file_name = new_file_name
                make_file_cmd = "type nul> " + self.file_name
                os.popen(make_file_cmd).read()
                self.f_file = open('./' + self.file_name, 'w', encoding='utf-8')
        self.f_file.write(word)

    def closeFile(self):
        self.f_file.close()

    def closeFfmpeg(self, proc):
        # the original used C#-style StandardInput calls, which are not valid Python
        if proc is not None:
            proc.stdin.write(b"q\n")
            proc.stdin.flush()

    def test_video_opencv(self,url,i,color):
        start_time = time.time()

        self.video_to_pic(url,i)
        pic_len = len(os.listdir("./recordingToolPicTmp/" + str(i) + "/"))
        gray_num = 0
        gray_index = []
        for png_num in range(1, pic_len + 1):
            img = cv2.imread("./recordingToolPicTmp/" + str(i) + "/" + str(png_num) + ".png")
            # gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale
            # ret, img = cv2.threshold(gray, 160, 255, cv2.THRESH_BINARY)  # binarize the grayscale image
            # img = 255 - img

            lower_orange_array,upper_orange_array = self.choose_color(color)
            lower_orange = np.array(lower_orange_array)
            upper_orange = np.array(upper_orange_array)

            hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

            mask = cv2.inRange(hsv, lower_orange, upper_orange)
            # cv2.imshow('image', mask)
            # cv2.waitKey(0)

            binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
            binary = cv2.dilate(binary, None, iterations=2)

            if int(cv2.__version__[0]) > 2:
                contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            else:
                _, contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

            # sum the contour areas
            pic_sum = 0
            space = img.shape[0] * img.shape[1]
            for cts in contours:
                pic_sum += cv2.contourArea(cts)
            if color == "white" and pic_sum / space > 0.95:
                gray_num += 1
                gray_index.append(pic_sum)
                # file.write(color + " screen : ./recordingToolPicTmp/" + str(i) + "/" + str(png_num) + ".png" + "\n")
            if color == "green" and pic_sum / space >= 0.01:
                gray_num += 1
                gray_index.append(pic_sum)
                word = color + " screen : ./recordingToolPicTmp/" + str(i) + "/" + str(png_num) + ".png ," + str(datetime.now().strftime('%Y-%m-%d %H:%M:%S')) + "\n"
                mkdir_cmd = "mkdir .\\tmp\\" + str(i)
                os.popen(mkdir_cmd).read()
                cp_cmd_1 = "cp .\\recordingToolPicTmp\\" + str(i) + "\\" + str(png_num) + ".png .\\tmp\\" + str(i)
                os.popen(cp_cmd_1).read()
                cp_cmd = "cp .\\recordingToolTmp\\Screen" + str(i) + ".mp4 .\\tmp\\"
                os.popen(cp_cmd).read()
                # self.writeWordByHour(str(pic_sum/space))
                self.writeWordByHour(word)
            if color not in ("white", "green") and pic_sum / space > 0.05:
                gray_num += 1
                gray_index.append(pic_sum)
                word = color + " screen: ./recordingToolPicTmp/" + str(i) + "/" + str(png_num) + ".png" + "\n"
                self.writeWordByHour(word)

        end_time = time.time()
        print(f"test_video_opencv time consumption: {str(end_time - start_time)} seconds")
        return gray_num




if __name__ == '__main__':

    pre_cmd1 = "mkdir recordingToolTmp"
    os.popen(pre_cmd1).read()

    pre_cmd1 = "mkdir recordingToolPicTmp"
    os.popen(pre_cmd1).read()

    pre_cmd1 = "mkdir tmp"
    os.popen(pre_cmd1).read()

    r = recordingTool()


    between_time = 5
    num = 17

    obj = Coo()
    for i in range(num):
        print(i)
        url = "./recordingToolTmp/Screen" + str(int(i / between_time - 1)) + ".mp4"
        if i == 0:
            obj.execute(0, between_time)
        elif i % between_time == 0:
            obj.exit()
            obj = Coo()
            r.test_length(url)
            r.test_video_opencv(url, str(int(i / between_time - 1)), "green")

            del_cmd1 = "rmdir /s /q .\\recordingToolPicTmp\\" + str(int(i / between_time - 1))
            delcmd1 = os.popen(del_cmd1).read()
            del_cmd2 = "del /s /q .\\recordingToolTmp\\Screen" + str(int(i / between_time - 1)) + ".mp4"
            delcmd2 = os.popen(del_cmd2).read()

            obj.execute(i, between_time)
        if i == num-1:
            r.writeWordByHour("last second1")
            obj.exit()
            r.closeFile()
            # kill_all_ffmpeg_cmd = "taskkill /f /im ffmpeg.exe"
            # os.popen(kill_all_ffmpeg_cmd).read()
            # sleep(1)
            kill_all_python_cmd = "taskkill /f /im python.exe"
            os.popen(kill_all_python_cmd).read()
            sleep(1)
            # note: the taskkill above also kills this script, so the lines below may never run
            kill_parent_cmd = "taskkill /f /pid " + str(os.getppid())
            os.popen(kill_parent_cmd).read()
        sleep(1)
    print("main thread finished, exiting")
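Both scripts emulate segmented recording by repeatedly killing and restarting ffmpeg, which risks corrupting the file being written. ffmpeg's segment muxer can do the slicing inside one long-running process instead. A hedged sketch of building such a command (the device name, segment length, and output pattern are assumptions; adapt them to your capture setup):

```python
def build_segment_cmd(device="screen-capture-recorder", seconds=60,
                      out_pattern="./recordingToolTmp/Screen%03d.mp4"):
    """Build an ffmpeg argv that records continuously and slices into fixed-length files."""
    return [
        "ffmpeg",
        "-f", "dshow", "-i", "video=" + device,  # capture input (Windows DirectShow)
        "-f", "segment",                         # segment muxer: one output file per slice
        "-segment_time", str(seconds),           # slice length in seconds
        "-reset_timestamps", "1",                # restart timestamps at 0 in each slice
        out_pattern,                             # %03d is replaced with the slice index
    ]

print(" ".join(build_segment_cmd()))
```

With this approach the analysis loop only has to watch the output directory for finished slices, rather than orchestrating kill/restart cycles; on macOS the input would be `-f avfoundation -i "3:0"` as in the first script.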
