
When Pose Meets Scripts

Published: 2023-05-22 05:14:19

㈠ How do I make a WoW warrior macro that automatically switches to Battle Stance for Overpower?

I'd advise against a macro. Dodge is buggy: sometimes after a dodge you still can't use Overpower. And against a skilled rogue, once you switch to Battle Stance without Berserker Rage up, they'll simply disarm or stun you rather than let you land Overpower. You're better off doing it by hand. The macro adds little, and the mechanics aren't hard: just bind the three stances to three hotkeys.

㈡ Datasets for pose and action recognition

Reference: https://blog.csdn.net/qq_38522972/article/details/82953477

Pose paper roundup: https://blog.csdn.net/zziahgf/article/details/78203621

Classic projects: https://blog.csdn.net/ls83776736/article/details/87991515

Pose estimation and action recognition are fundamentally different tasks. Action recognition can be seen as person localization plus action classification, while pose estimation can be understood as keypoint detection plus assigning an id to each keypoint (in both multi-person and single-person settings).

Because of limitations in data-collection equipment, most pose data today is harvested by clipping publicly available video, so 2D datasets are relatively easy to obtain while 3D datasets are much harder. 2D datasets cover both indoor and outdoor scenes; 3D datasets are currently indoor only.

COCO

Address: http://cocodataset.org/#download

Samples: ≥ 300K

Keypoints: 18

Full body, multi-person; keypoints annotated on 100K people

LSP

Address: http://sam.johnson.io/research/lsp.html

Samples: 2K

Keypoints: 14

Full body, single person

The extended LSP dataset contains 10,000 images of people performing gymnastics, athletics and parkour.

FLIC

Address: https://bensapp.github.io/flic-dataset.html

Samples: 20K

Keypoints: 9

Full body, single person

MPII

Samples: 25K

Full body, single/multi-person; 40K people, 410 human activities

16 keypoints: 0 - r ankle, 1 - r knee, 2 - r hip, 3 - l hip, 4 - l knee, 5 - l ankle, 6 - pelvis, 7 - thorax, 8 - upper neck, 9 - head top, 10 - r wrist, 11 - r elbow, 12 - r shoulder, 13 - l shoulder, 14 - l elbow, 15 - l wrist

No mask annotations

In order to analyze the challenges for fine-grained human activity recognition, we build on our recent publicly available "MPI Human Pose" dataset [2]. The dataset was collected from YouTube videos using an established two-level hierarchy of over 800 everyday human activities. The activities at the first level of the hierarchy correspond to thematic categories, such as "Home repair", "Occupation", "Music playing", etc., while the activities at the second level correspond to individual activities, e.g. "Painting inside the house", "Hairstylist" and "Playing woodwind". In total the dataset contains 20 categories and 410 individual activities covering a wider variety of activities than other datasets, while its systematic data collection aims for a fair activity coverage. Overall the dataset contains 24,920 video snippets and each snippet is at least 41 frames long. Altogether the dataset contains over 1M frames. Each video snippet has a key frame containing at least one person with a sufficient portion of the body visible and annotated body joints. There are 40,522 annotated people in total. In addition, for a subset of key frames richer labels are available, including full 3D torso and head orientation and occlusion labels for joints and body parts.


14 keypoints: 0 - r ankle, 1 - r knee, 2 - r hip, 3 - l hip, 4 - l knee, 5 - l ankle, 8 - upper neck, 9 - head top, 10 - r wrist, 11 - r elbow, 12 - r shoulder, 13 - l shoulder, 14 - l elbow, 15 - l wrist

No mask annotations; head bounding boxes are annotated
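For reference, the id-to-name layouts above can be written down directly as lookup tables. This is a sketch in Python; the names `MPII_16` and `JOINTS_14` are chosen here, and ids 6 and 7 follow the standard MPII convention of pelvis and thorax:

```python
# Keypoint id -> name mapping for the 16-joint MPII-style layout above.
# Ids 6 and 7 are pelvis and thorax in the standard MPII layout.
MPII_16 = {
    0: "r ankle", 1: "r knee", 2: "r hip", 3: "l hip",
    4: "l knee", 5: "l ankle", 6: "pelvis", 7: "thorax",
    8: "upper neck", 9: "head top", 10: "r wrist", 11: "r elbow",
    12: "r shoulder", 13: "l shoulder", 14: "l elbow", 15: "l wrist",
}

# The 14-joint layout keeps the same ids but omits pelvis and thorax,
# so the ids are not contiguous.
JOINTS_14 = {k: v for k, v in MPII_16.items() if k not in (6, 7)}
```

Keeping the original ids (rather than renumbering 0-13) makes annotations from the two layouts directly comparable.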

PoseTrack is a large-scale benchmark for human pose estimation and tracking in image sequences. It provides a publicly available training and validation set as well as an evaluation server for benchmarking on a held-out test set (www.posetrack.net).


In the PoseTrack benchmark each person is labeled with a head bounding box and positions of the body joints. We omit annotations of people in dense crowds and in some cases also choose to skip annotating people in upright standing poses. This is done to focus annotation efforts on the relevant people in the scene. We include ignore regions to specify which people in the image were ignored during annotation.


Each sequence included in the PoseTrack benchmark corresponds to about 5 seconds of video. The number of frames in each sequence might vary, as different videos were recorded with different numbers of frames per second. For the **training** sequences we provide annotations for 30 consecutive frames centered in the middle of the sequence. For the **validation and test** sequences we annotate 30 consecutive frames and in addition annotate every 4th frame of the sequence. The rationale for that is to evaluate both the smoothness of the estimated body trajectories and the ability to generate consistent tracks over a longer temporal span. Note that even though we do not label every frame in the provided sequences, we still expect the unlabeled frames to be useful for achieving better performance on the labeled frames.


The PoseTrack 2018 submission file format is based on the Microsoft COCO dataset annotation format. We decided for this step to 1) maintain compatibility to a commonly used format and commonly used tools while 2) allowing for sufficient flexibility for the different challenges. These are the 2D tracking challenge, the 3D tracking challenge as well as the dense 2D tracking challenge.


Furthermore, we require submissions as a zipped version of either one big .json file or one .json file per sequence, to 1) be flexible w.r.t. tools for each sequence (e.g., easy visualization of a single sequence independent of the others) and 2) avoid problems with file size and processing.

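The "one .json per sequence, zipped" packaging can be sketched with the Python standard library. The sequence names and the (empty) contents below are placeholders, not real PoseTrack data:

```python
import json
import zipfile

# Hypothetical per-sequence predictions: each key is a sequence name,
# each value is that sequence's COCO-style submission dictionary.
predictions = {
    "sequence_a": {"images": [], "annotations": [], "categories": []},
    "sequence_b": {"images": [], "annotations": [], "categories": []},
}

# Write one .json file per sequence into a single zip archive.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for seq_name, sub in predictions.items():
        zf.writestr(seq_name + ".json", json.dumps(sub))
```

`writestr` avoids creating intermediate files on disk, which keeps the per-sequence layout cheap even for many sequences.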

The MS COCO file format is a nested structure of dictionaries and lists. For evaluation, we only need a subset of the standard fields; however, a few additional fields are required for the evaluation protocol (e.g., a confidence value for every estimated body landmark). In the following we describe the minimal but required set of fields for a submission. Additional fields may be present, but are ignored by the evaluation script.


At top level, each .json file stores a dictionary with three elements:

* images

* annotations

* categories

`images` is a list of the images described in this file. The list must contain the information for all images referenced by a person description in the file. Each list element is a dictionary and must contain only two fields: `file_name` and `id` (a unique int). The file name must refer to the original PoseTrack image as extracted from the test set, e.g., `images/test/023736_mpii_test/000000.jpg`.


`annotations` is another list of dictionaries. Each item of the list describes one detected person and is itself a dictionary. It must have at least the following fields:

* `image_id` (int; an image with a corresponding id must be in `images`),

* `track_id` (int; the track this person belongs to, unique per frame),

* `keypoints` (list of floats; length is three times the number of estimated keypoints, in the order x, y, ? for every point. The third value per keypoint is only there for COCO format consistency and is not used.),

* `scores` (list of floats; length equals the number of estimated keypoints, each value between 0 and 1 giving a prediction confidence for that keypoint),

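Putting the pieces together, a minimal submission dictionary with one annotated person might look like the following sketch. The ids, coordinates, and the `NUM_KEYPOINTS` value are placeholders chosen for illustration; the file name reuses the example path quoted above:

```python
NUM_KEYPOINTS = 15  # illustrative keypoint count, not mandated by the format

# One detected person with the minimal required fields described above.
person = {
    "image_id": 0,
    "track_id": 0,
    # x, y, ? triples; the third value per keypoint is unused and kept
    # only for COCO format consistency.
    "keypoints": [0.0, 0.0, 0.0] * NUM_KEYPOINTS,
    # One confidence in [0, 1] per estimated keypoint.
    "scores": [0.5] * NUM_KEYPOINTS,
}

submission = {
    "images": [
        {"file_name": "images/test/023736_mpii_test/000000.jpg", "id": 0}
    ],
    "annotations": [person],
    "categories": [],
}

# Consistency checks implied by the format description.
assert len(person["keypoints"]) == 3 * len(person["scores"])
assert all(0.0 <= s <= 1.0 for s in person["scores"])
```

Serializing `submission` with `json.dumps` then yields one per-sequence .json file in the layout described above.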

The Human3.6M dataset contains 3.6 million 3D human poses with corresponding images, covering 11 subjects (6 male, 5 female; papers typically use subjects 1, 5, 6, 7, 8 for training and 9, 11 for testing) across 17 action scenarios such as discussing, eating, exercising, and greeting. The data were captured by 4 digital cameras, 1 time-of-flight sensor, and 10 motion cameras.

Produced by the Max Planck Institute for Informatics; for details see the paper Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision.

Paper: https://arxiv.org/abs/1705.08421

1. Important papers on single-person pose estimation

2014----Articulated Pose Estimation by a Graphical Model with Image Dependent Pairwise Relations

2014----DeepPose: Human Pose Estimation via Deep Neural Networks

2014----Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation

2014----Learning Human Pose Estimation Features with Convolutional Networks

2014----MoDeep: A Deep Learning Framework Using Motion Features for Human Pose Estimation

2015----Efficient Object Localization Using Convolutional Networks

2015----Human Pose Estimation with Iterative Error Feedback

2015----Pose-based CNN Features for Action Recognition

2016----Advancing Hand Gesture Recognition with High Resolution Electrical Impedance Tomography

2016----Chained Predictions Using Convolutional Neural Networks

2016----CPM----Convolutional Pose Machines

2016----CVPR-2016----End-to-End Learning of Deformable Mixture of Parts and Deep Convolutional Neural Networks for Human Pose Estimation

2016----Deep Learning of Local RGB-D Patches for 3D Object Detection and 6D Pose Estimation

2016----PAFs----Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields (OpenPose)

2016----Stacked Hourglass----Stacked Hourglass Networks for Human Pose Estimation

2016----Structured Feature Learning for Pose Estimation

2017----Adversarial PoseNet: A Structure-aware Convolutional Network for Human Pose Estimation

2017----CVPR 2017 oral----Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields

2017----Learning Feature Pyramids for Human Pose Estimation

2017----Multi-Context Attention for Human Pose Estimation

2017----Self Adversarial Training for Human Pose Estimation

2. Important papers on multi-person pose estimation

2016----Associative Embedding----End-to-End Learning for Joint Detection and Grouping

2016----DeepCut----Joint Subset Partition and Labeling for Multi Person Pose Estimation

2016----DeepCut----Joint Subset Partition and Labeling for Multi Person Pose Estimation (poster)

2016----DeeperCut----A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model

2017----G-RMI----Towards Accurate Multi-person Pose Estimation in the Wild

2017----RMPE----Regional Multi-Person Pose Estimation (AlphaPose)

2018----Cascaded Pyramid Network for Multi-Person Pose Estimation


2018----DensePose: Dense Human Pose Estimation in the Wild

(worth a close read; DensePose merits further study)

2018----3D Human Pose Estimation in the Wild by Adversarial Learning


㈢ How to write a recitation script

A recitation script can be written as follows:

I. Explaining the terms

Recitation: reading poems and prose aloud (a form of oral training). Script: the text on which a performance of drama, quyi, or a film shoot is based, containing the lines, plot, and so on.

II. Why it matters

A. Common problems: 1. starting strong and fading; 2. a single unvarying rhythm; 3. lack of fluency; 4. reading too fast; 5. reading too quietly. B. Benefits: 1. Recitation builds a reserve of vocabulary and sentence patterns. (They imperceptibly become part of your own language store, enriching your oral expressiveness and helping you convey meaning vividly and precisely.)

2. It develops an accurate feel for language

You learn grammar, rhetoric, and logic within a living language environment, experience the richness of the language, develop coherence of expression, and grasp the regularities of how language works.

3. It trains eloquence

It exercises the articulators and vocal cords, and teaches speaking technique along the way.

III. Training

1. Pronunciation training

Clear, accurate articulation; soft-voice and slow-voice pronunciation exercises. Falsetto?

Exercise: tongue twisters (to strengthen the tongue and sharpen articulation).

2. Recitation guidance:

1. Timbre: the quality of the voice (partly trained, partly innate) (a eulogy: deep and resonant; a fairy tale: crisp and warm). It should be pleasant and pure, free of flaws (e.g., Beijing girls pronouncing 「小」 with an apical sound).

2. Pitch: varying high and low, heavy and light, to convey rich feeling (cadence); heavier stress marks emphasis.

3. Rhythm: the pace, fast and slow.

a. Pausing: follow the punctuation, paragraph breaks, and blank lines; pause for meaning, for grammar, and for rhetorical effect. b. Speed: narrative writing and prose (fast) for tense scenes and anxious moods; argumentative writing and poetry (slow) for calm scenes, beautiful scenery, and heavy moods. c. Volume: the force of the voice (loud, sustained, resilient).

3. Guidance on expression

A poised posture, breath driven from the abdomen, even breathing, and a natural expression.

IV. How to write the script

1. Choose a passage (or section) and state the topic clearly; each lesson has a different focus.

2. Summarize what the passage says.

3. Decide the overall feeling the reading should carry, and what tone, speed, pitch, and timbre to use.

4. Analyze key words and sentences in detail: how exactly should they be read?