Llama 3.2-11B-Vision Multimodal Model Architecture and Inference, Explained Down to Individual Operators — Image Preprocessing in Detail
Meta released its first open-source multimodal model, Llama 3.2-11B-Vision, last year, yet there are almost no blog posts that examine the concrete construction of its architecture. This leaves readers confused about its principles and structure to varying degrees and makes the model harder to learn. This series therefore explains the model step by step from the very beginning, without glossing over any stage, and is well suited to readers new to large models. The series outline:
- Image preprocessing in detail (this post)
- Text preprocessing in detail (coming soon…)
- The vision encoder: structure and steps in detail (coming soon…)
- The text encoder: structure and steps in detail (coming soon…)
- Text fusion: structure and steps in detail (coming soon…)
- The output stage: structure and steps in detail (coming soon…)
- A complete high-resolution flowchart of Llama 3.2-11B-Vision inference (the bonus of this series ^_^, coming soon…)
Llama 3.2-11B-Vision Image Preprocessing in Detail
0. Llama 3.2-11B-Vision Inference Code
Below is the complete inference code based on the official example. I use a grayscale image: a 28×28 image from the MNIST dataset with a strip cropped off the top, leaving a 28-wide × 24-high image. Read with PIL, it becomes a 24×28 (height × width) matrix of unsigned 8-bit integers in the range 0–255. The image is then passed to the processor call, which is the algorithm this post examines in detail; everything else belongs to text preprocessing and will be covered in the next post. (The snippet calls a save_tensor_to_txt helper that the original code does not define; a minimal stand-in is added below.)
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor
from time import time
import numpy as np

def save_tensor_to_txt(tensor, path):
    # Hypothetical helper (the original post does not show its definition):
    # flatten a tensor and dump its values to a text file for inspection.
    arr = tensor.detach().cpu().float().numpy() if torch.is_tensor(tensor) else np.asarray(tensor)
    np.savetxt(path, arr.reshape(-1), fmt="%.6f")
model_dir = "./models/llama3.2_11b"
model = MllamaForConditionalGeneration.from_pretrained(
    model_dir,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",
)
model.tie_weights()
# Initialization: this resolves the tokenizer and the image-tiling configuration.
processor = AutoProcessor.from_pretrained(model_dir)
if __name__ == '__main__':
    url = "https://www.modelscope.cn/models/LLM-Research/Llama-3.2-11B-Vision/resolve/master/rabbit.jpg"
    while 1:  # endless loop used for repeated testing; interrupt to stop
        # image = Image.open("./data/1995.jpg")  # image path
        image = Image.open("./data/000000.png")
        # image = Image.open("./data/bicycle_bigger.png")
        image_array = np.array(image)  # for inspection only: uint8 values in the range 0-255
        query = "What digit is shown in the image?"
        # query = "What vehicle is shown in the image?"
        # query = "What is the person in the image doing?"
        messages = [
            {"role": "user", "content": [
                {"type": "image"},
                {"type": "text", "text": query}  # fill in the question
            ]}
        ]
        s1 = time()
        input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
        # the line above just wraps the messages in a text template; it runs locally
        # the call below is processor.__call__, the function this post dissects
        inputs = processor(image, input_text, return_tensors="pt").to(model.device)
        for k in inputs.data:  # save the intermediate tensors for inspection
            save_tensor_to_txt(inputs.data[k], f'./data/tensor_output_0{k}.txt')
        print(processor.decode(inputs["input_ids"][0]))
        # the text-image fusion above is the tricky part
        output = model.generate(**inputs, max_new_tokens=1000)
        print(time() - s1)
        print(processor.decode(output[0]))
1. Image Preprocessing Code Overview
The code below is a self-contained example built on the official framework. The input 11100.png is the 28-wide × 24-high grayscale image described above. Color images go through exactly the same pipeline, since every input is first converted to a three-channel pixel representation; a simple grayscale image just keeps the walkthrough easy to follow. The preprocess method of the MllamaImageProcessor class below is the main subject of this post.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@File: image_preprocess.py
@Time: 2025/1/6 23:39
@Author: 料理码王
@Email: 1289590668@qq.com
@Function: take a 28-wide x 24-high grayscale image as input and preprocess it!
"""
from PIL import Image
import numpy as np
from transformers import MllamaImageProcessor

# Load the image
image_path = "./data/11100.png"
image = Image.open(image_path)

# Create and configure a MllamaImageProcessor instance
image_processor = MllamaImageProcessor(
    do_convert_rgb=True,  # each grayscale pixel is replicated three times into a [v, v, v] list
    do_normalize=True,  # per channel: subtract the mean, divide by the standard deviation
    do_pad=True,
    do_rescale=True,
    do_resize=True,
    image_mean=[0.48145466, 0.4578275, 0.40821073],  # a preset constant
    image_std=[0.26862954, 0.26130258, 0.27577711],  # a preset constant
    max_image_tiles=4,
    resample=2,  # 2 corresponds to PILImageResampling.BILINEAR: bilinear interpolation for resizing
    rescale_factor=0.00392156862745098,  # 1/255, set by the value range of a single pixel
    size={"height": 560, "width": 560}
)

# Call the preprocess method
processed_image = image_processor.preprocess(
    images=image,
    return_tensors="pt",  # return PyTorch tensors
)

# Fetch the preprocessed pixel_values and the other outputs
pixel_values = processed_image["pixel_values"]
aspect_ratio_ids = processed_image["aspect_ratio_ids"]
aspect_ratio_mask = processed_image["aspect_ratio_mask"]
num_tiles = processed_image["num_tiles"]

# Print the results (or use the processed data further)
print("Pixel Values Shape:", pixel_values.shape)
print("Aspect Ratio IDs:", aspect_ratio_ids)
print("Aspect Ratio Mask:", aspect_ratio_mask)
print("Number of Tiles:", num_tiles)
2. The MllamaImageProcessor Class
Below is the beginning of the class, listing all of the constructor parameters. The source docstring already documents each parameter in detail; an LLM such as ERNIE Bot (文心一言) can also walk you through exactly what each one does. The initializer itself mainly validates the arguments so that conflicting settings are caught early.
class MllamaImageProcessor(BaseImageProcessor):
"""
Constructs a Mllama image processor.
Args:
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB. This is useful if the input image is of a different format e.g. RGBA.
Only has an effect if the input image is in the PIL format.
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image.
size (`Dict[str, int]`, *optional*, defaults to `self.size`):
Size of the image tile. Should be a dictionary containing 'height' and 'width' keys, both with integer values.
The height and width values should be equal.
resample (`int`, *optional*, defaults to `Resampling.BILINEAR`):
Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only
has an effect if `do_resize` is set to `True`.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image.
rescale_factor (`float`, *optional*, defaults to `1/255`):
Rescale factor to rescale the image by if `do_rescale` is set to `True`.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image.
image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to
`True`.
do_pad (`bool`, *optional*, defaults to `True`):
Whether or not to pad the images to the largest height and width in the batch.
max_image_tiles (`int`, *optional*, defaults to 4):
The maximum number of tiles to split the image into.
"""
model_input_names = ["pixel_values", "num_tiles", "aspect_ratio_ids", "aspect_ratio_mask"]
def __init__(
self,
do_convert_rgb: bool = True,
do_resize: bool = True,
size: Optional[Dict[str, int]] = None,
resample: PILImageResampling = PILImageResampling.BILINEAR,
do_rescale: bool = True,
rescale_factor: float = 1 / 255,  # preset from the value range (0-255) of a single pixel
do_normalize: bool = True,
image_mean: Optional[Union[float, List[float]]] = None,
image_std: Optional[Union[float, List[float]]] = None,
do_pad: bool = True,
max_image_tiles: int = 4,
**kwargs,
) -> None:
super().__init__(**kwargs)
self.do_convert_rgb = do_convert_rgb
self.do_resize = do_resize
self.size = size if size is not None else {"height": 224, "width": 224}
self.resample = resample
self.do_rescale = do_rescale
self.rescale_factor = rescale_factor
self.do_normalize = do_normalize
self.image_mean = image_mean if image_mean is not None else IMAGENET_STANDARD_MEAN
self.image_std = image_std if image_std is not None else IMAGENET_STANDARD_STD
self.do_pad = do_pad
self.max_image_tiles = max_image_tiles
_validate_mllama_preprocess_arguments(self.do_resize, self.size, self.do_pad, self.max_image_tiles)
def preprocess(
self,
images: ImageInput,
do_convert_rgb: Optional[bool] = None,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
resample: Optional[PILImageResampling] = None,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_normalize: Optional[bool] = None,
image_mean: Optional[Union[float, List[float]]] = None,
image_std: Optional[Union[float, List[float]]] = None,
do_pad: Optional[bool] = None,
max_image_tiles: Optional[int] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
):
"""
Preprocess a batch of images.
Args:
images (`ImageInput`):
A list of images to preprocess.
do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
Whether to convert the image to RGB.
do_resize (`bool`, *optional*, defaults to `self.do_resize`):
Whether to resize the image.
size (`Dict[str, int]`, *optional*, defaults to `self.size`):
Size of the image tile. Should be a dictionary containing 'height' and 'width' keys, both with integer values.
The height and width values should be equal.
resample (`int`, *optional*, defaults to `self.resample`):
Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only
has an effect if `do_resize` is set to `True`.
do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
Whether to rescale the image.
rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
Rescale factor to rescale the image by if `do_rescale` is set to `True`.
do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
Whether to normalize the image.
image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to
`True`.
do_pad (`bool`, *optional*, defaults to `self.do_pad`):
Whether or not to pad the images to the largest height and width in the batch.
max_image_tiles (`int`, *optional*, defaults to `self.max_image_tiles`):
The maximum number of tiles to split the image into.
input_data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format for the input image. If unset, the channel dimension format is inferred
from the input image. Can be one of:
- `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
- `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
- `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
return_tensors (`str` or `TensorType`, *optional*):
The type of tensors to return. Can be one of:
- Unset: Return a list of `np.ndarray`.
- `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
- `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
- `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
- `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
Returns:
`BatchFeature` of the following structure:
- **pixel_values** (`TensorType`): The preprocessed pixel values.
- **aspect_ratio_ids** (`TensorType`): The aspect ratio ids of the images.
- **num_tiles** (`List[List[int]]`): The number of tiles for each image in the batch.
"""
do_convert_rgb = do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb
do_resize = do_resize if do_resize is not None else self.do_resize
size = size if size is not None else self.size
resample = resample if resample is not None else self.resample
do_rescale = do_rescale if do_rescale is not None else self.do_rescale
rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor
do_normalize = do_normalize if do_normalize is not None else self.do_normalize
image_mean = image_mean if image_mean is not None else self.image_mean
image_std = image_std if image_std is not None else self.image_std
do_pad = do_pad if do_pad is not None else self.do_pad
max_image_tiles = max_image_tiles if max_image_tiles is not None else self.max_image_tiles
validate_preprocess_arguments(
do_rescale=do_rescale,
rescale_factor=rescale_factor,
do_normalize=do_normalize,
image_mean=image_mean,
image_std=image_std,
do_resize=do_resize,
size=size,
resample=resample,
)
# extra validation
_validate_mllama_preprocess_arguments(do_resize, size, do_pad, max_image_tiles)
3. Converting to RGB
- With do_convert_rgb=True, each grayscale pixel value is replicated three times into three identical channels, producing a 3-channel RGB image. This lets grayscale inputs go through the same pipeline as RGB images.
- For example, the 28-wide × 24-high grayscale image becomes an array of shape (24, 28, 3).
# 1. Wrap the image into a list, twice (a list of batch samples, each a list of images)
images_list = make_list_of_images(images)
# 2. For a grayscale image, each pixel value is replicated three times, e.g. 151 ---> [151, 151, 151]
if self.do_convert_rgb:
    # the outer loop iterates over batch samples (one per text sequence), the inner over the images of one sample
    images_list = [[convert_to_rgb(image) for image in images] for images in images_list]
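A minimal sketch of what this amounts to for a grayscale ("L" mode) image, using PIL directly (the convert_to_rgb shown in the full source routes through RGBA and alpha compositing, but for an opaque grayscale image the effect is the same):

from PIL import Image
import numpy as np

gray = Image.fromarray(np.array([[151, 0], [255, 30]], dtype=np.uint8), mode="L")  # tiny 2x2 example
rgb = gray.convert("RGB")
print(np.array(gray).shape)  # (2, 2)
print(np.array(rgb).shape)   # (2, 2, 3)
print(np.array(rgb)[0, 0])   # [151 151 151] -> the grayscale value replicated per channel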
4. Converting to a NumPy Array
- to_numpy_array(image) converts the image object into a numpy.ndarray so the pixel values can be processed numerically. The image now has shape (24, 28, 3): three channel values per pixel.
# 3. Convert each image object into concrete pixel values, returned as a numpy array of shape 24*28*3
images_list = [[to_numpy_array(image) for image in images] for images in images_list]
5. Adjusting the Channel Format
- to_channel_dimension_format: the Llama vision stack expects "channels_first" input, i.e. (channels, height, width), so this step converts the array from (24, 28, 3) to (3, 24, 28).
- For a grayscale input the replicated RGB channels are identical, so the image now has 3 channels, each holding the same 24×28 pixels.
batch_images = []
batch_aspect_ratios = []
# iterate over batch samples
for images in images_list:
    sample_images = []
    sample_aspect_ratios = []
    # iterate over images in a batch sample
    for image in images:
        # convert images to channels first format for faster processing
        # LAST is slower for `pad` and not supported by `split_to_tiles`
        data_format = ChannelDimension.FIRST  # the first dimension is the channel
        # 4. shape goes from 24*28*3 to 3*24*28; for a grayscale input the three planes are identical
        image = to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format)
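Numerically the conversion is just a transpose; a minimal sketch:

import numpy as np

hwc = np.zeros((24, 28, 3))   # (height, width, channels), as returned by to_numpy_array
chw = hwc.transpose(2, 0, 1)  # channels-first: what to_channel_dimension_format does here
print(chw.shape)              # (3, 24, 28)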
6. Resizing
- resize: the image is resized to fit the chosen tiled canvas, here a single 560×560 tile, using bilinear interpolation and preserving the aspect ratio. For our input the result is (3, 480, 560) rather than a full (3, 560, 560); padding fills in the rest in the next step.
- The tile grid chosen for the image, aspect_ratio, is computed and returned; it drives the padding and tile splitting below.
# do_resize=False is not supported, validated
# 5. resize: each of the three 24*28 planes is resized with bilinear interpolation, giving a 3*480*560 image
image, aspect_ratio = self.resize(
    image=image,
    size=size,
    resample=resample,
    max_image_tiles=max_image_tiles,
    input_data_format=data_format,
    data_format=data_format,
)
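To see where (3, 480, 560) comes from, here is the arithmetic of get_optimal_tiled_canvas and get_image_size_fit_to_canvas (both shown in the full source in section 14), replayed for this input:

import math
import numpy as np

image_height, image_width = 24, 28   # our grayscale input in (height, width) order
tile_size, max_image_tiles = 560, 4

# Candidate canvases are all tile grids (w, h) with w*h <= 4, scaled by 560.
# For a tiny 24x28 image every canvas allows upscaling, so the smallest
# upscaling factor wins, with ties broken by smallest area: the 1x1 canvas.
canvas_height = canvas_width = tile_size

# Fit the image into the canvas while preserving its aspect ratio.
target_height = int(np.clip(image_height, tile_size, canvas_height))  # 560
target_width = int(np.clip(image_width, tile_size, canvas_width))     # 560
scale_h = target_height / image_height  # 560 / 24 ~ 23.33
scale_w = target_width / image_width    # 560 / 28 = 20.0
if scale_w < scale_h:
    # width is the limiting side: use it fully, scale the height by the same factor
    new_width = target_width                                              # 560
    new_height = min(math.floor(image_height * scale_w), target_height)  # 24 * 20 = 480
else:
    new_height = target_height
    new_width = min(math.floor(image_width * scale_h), target_width)
print(new_height, new_width)  # 480 560 -> the (3, 480, 560) image above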
7. Padding
- pad: with do_pad=True, the resized image is padded up to the canvas implied by its tile grid, i.e. aspect_ratio × size. Here (3, 480, 560) is padded to (3, 560, 560), with rows 480 onward filled with zeros. The exact padded size is determined by size and aspect_ratio.
# do_pad=False is not supported, validated
# 6. pad up to one of the eight canvas types; here a 1*1-tile canvas, i.e. 3*560*560, with rows 480+ zero-filled
image = self.pad(
    image=image,
    size=size,
    aspect_ratio=aspect_ratio,  # a 1*1-tile grid (one of eight possible grids)
    input_data_format=data_format,
    data_format=data_format,
)
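The padding itself is plain bottom/right zero-padding up to the canvas size, equivalent to this numpy sketch:

import numpy as np

image = np.ones((3, 480, 560))            # the resized image from the previous step
num_tiles_height, num_tiles_width = 1, 1  # the chosen tile grid
pad_rows = num_tiles_height * 560 - image.shape[1]  # 80 rows to add
pad_cols = num_tiles_width * 560 - image.shape[2]   # 0 columns to add
padded = np.pad(image, ((0, 0), (0, pad_rows), (0, pad_cols)), constant_values=0)
print(padded.shape)              # (3, 560, 560)
print(padded[:, 480:, :].max())  # 0.0 -> rows 480..559 are all zeros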
8. Rescaling and Normalization
- do_rescale=True: every pixel value is multiplied by rescale_factor, by default 1/255, i.e. divided by 255 and mapped into [0, 1]. With do_rescale=False this step is skipped.
- do_normalize=True: after rescaling, each channel is standardized with image_mean and image_std (the preset constants configured above): subtract the channel mean, then divide by the channel standard deviation, standardizing the values relative to those statistics.
# 7. multiply every element of the 3*560*560 image by 1/255
if do_rescale:
    image = self.rescale(
        image=image,
        scale=rescale_factor,  # 1/255
        input_data_format=input_data_format,
        data_format=data_format,
    )
# 8. subtract the per-channel mean from every element of the 3*560*560 image, then divide by the per-channel std
if do_normalize:
    image = self.normalize(
        image=image,
        mean=image_mean,
        std=image_std,
        input_data_format=input_data_format,
        data_format=data_format,
    )
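Both steps are elementwise: a raw value v in channel c becomes (v / 255 − mean[c]) / std[c]. A minimal sketch:

import numpy as np

mean = np.array([0.48145466, 0.4578275, 0.40821073]).reshape(3, 1, 1)
std = np.array([0.26862954, 0.26130258, 0.27577711]).reshape(3, 1, 1)

image = np.full((3, 560, 560), 151.0)  # pretend every raw pixel value is 151
image = image * (1 / 255)              # rescale: values now in [0, 1]
image = (image - mean) / std           # normalize per channel
print(image[:, 0, 0])                  # ~[0.412 0.514 0.667], one value per channel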
9. Splitting into Tiles
- split_to_tiles: after resizing and padding, if the image spans more than one tile (i.e. is larger than 560×560), it is split into at most max_image_tiles=4 tiles, each of shape (3, 560, 560). For our 1×1 grid the "split" simply adds a tile dimension.
    # a 1*1 tile grid: both values are 1
    num_tiles_height, num_tiles_width = aspect_ratio
    # 9. with a 1*1 grid this just adds a leading dimension: 1 * 3 * 560 * 560
    image = split_to_tiles(image, num_tiles_height, num_tiles_width)
    sample_images.append(image)
    sample_aspect_ratios.append((num_tiles_height, num_tiles_width))
batch_images.append(sample_images)
batch_aspect_ratios.append(sample_aspect_ratios)
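The split is a reshape plus transpose (see split_to_tiles in the full source below); a sketch for a 2×1 grid and for our 1×1 case:

import numpy as np

def split_to_tiles(image, num_tiles_height, num_tiles_width):
    # Same logic as the function shown in section 14.
    num_channels, height, width = image.shape
    tile_height = height // num_tiles_height
    tile_width = width // num_tiles_width
    image = image.reshape(num_channels, num_tiles_height, tile_height, num_tiles_width, tile_width)
    image = image.transpose(1, 3, 0, 2, 4)  # -> (tiles_h, tiles_w, channels, tile_h, tile_w)
    return np.ascontiguousarray(
        image.reshape(num_tiles_height * num_tiles_width, num_channels, tile_height, tile_width)
    )

tall = np.zeros((3, 1120, 560))           # a padded image on a 2x1 canvas (two rows of tiles)
print(split_to_tiles(tall, 2, 1).shape)   # (2, 3, 560, 560)
square = np.zeros((3, 560, 560))          # our 1x1 case
print(split_to_tiles(square, 1, 1).shape) # (1, 3, 560, 560) -> just adds a tile dimension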
10. Computing Aspect Ratio IDs
- convert_aspect_ratios_to_ids: converts each image's tile-grid aspect ratio into a unique id, returning an aspect_ratio_ids array with one id per image.
# 10. Packing; an example:
# batch_images = [
#     [np.random.rand(1, 3, 560, 560)],  # first sample: 1 image, split into 1 tile with 3 channels
#     [np.random.rand(1, 3, 560, 560), np.random.rand(2, 3, 560, 560)],  # second sample: 2 images, the second split into 2 tiles
# ]
# max_image_tiles = 4
# images ends up with shape (2, 2, 4, 3, 560, 560): two samples, at most two images per sample,
# at most 4 tiles per image, 3 channels per tile, and each tile is 560x560
# the first sample's tiles fill images[0, 0]; the rest (e.g. images[0, 1]) stay zero
# the second sample's first image fills images[1, 0]; the 2 tiles of its second image fill images[1, 1]
# all_num_tiles is [[1], [1, 2]]: the first sample has 1 image with 1 tile; the second sample has
# 2 images, with 1 tile and 2 tiles respectively
images, num_tiles = pack_images(batch_images, max_image_tiles)
# An example for convert_aspect_ratios_to_ids:
# with max_image_tiles = 4 the possible ratios are [(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (3, 1), (4, 1)]
# suppose: aspect_ratios = [
#     [(1, 1)],          # first sample: 1 image with ratio (1, 1)
#     [(1, 2), (2, 1)],  # second sample: 2 images with ratios (1, 2) and (2, 1)
# ]
# get_all_supported_aspect_ratios(4) first enumerates the eight supported ratios above
# the returned aspect_ratios_ids array is array([[1, 0], [2, 5]]): the 1-based id of each
# ratio, zero-padded for samples with fewer images
aspect_ratio_ids = convert_aspect_ratios_to_ids(batch_aspect_ratios, max_image_tiles=max_image_tiles)
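A runnable sketch of the id computation, re-implementing the enumeration from the source in section 14:

import numpy as np

def get_all_supported_aspect_ratios(max_image_tiles):
    # Every (width, height) tile grid with width * height <= max_image_tiles.
    return [(w, h) for w in range(1, max_image_tiles + 1)
            for h in range(1, max_image_tiles + 1) if w * h <= max_image_tiles]

supported = get_all_supported_aspect_ratios(4)
print(supported)  # [(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (3, 1), (4, 1)]

aspect_ratios = [[(1, 1)], [(1, 2), (2, 1)]]
ids = np.zeros((2, 2), dtype=np.int64)  # (batch_size, max_num_images), 0 = padding
for i, sample in enumerate(aspect_ratios):
    for j, ratio in enumerate(sample):
        ids[i, j] = supported.index(ratio) + 1  # ids are 1-based so 0 can mean "no image"
print(ids)  # [[1 0] [2 5]]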
11. Building the Aspect Ratio Mask
- build_aspect_ratio_mask: builds, from each image's tile grid, a mask over the max_image_tiles tile slots marking which ones are valid. If an image is split into several tiles, the mask indicates which slots hold real tiles.
# An example of what build_aspect_ratio_mask does:
# with max_image_tiles = 6, a ratio of (2, 3) would mean 2 x 3 = 6 tiles, so the first 6 mask entries are set to 1
# another example: aspect_ratios = [ [(1, 1)],           # first sample: 1 image with ratio (1, 1)
#                                    [(1, 2), (2, 2)] ]  # second sample: 2 images with ratios (1, 2) and (2, 2)
# max_image_tiles = 4
# initial state: aspect_ratio_mask = np.zeros((2, 2, 4), dtype=np.int64)
# set the first tile slot everywhere: aspect_ratio_mask[:, :, 0] = 1, giving
#   [[[1, 0, 0, 0], [1, 0, 0, 0]], [[1, 0, 0, 0], [1, 0, 0, 0]]]
# after marking each image's real tiles, the final mask is
#   [[[1, 0, 0, 0], [1, 0, 0, 0]], [[1, 1, 0, 0], [1, 1, 1, 1]]]
#   (the padded image slot of the first sample keeps its first entry at 1, since [:, :, 0] = 1 is applied globally)
aspect_ratio_mask = build_aspect_ratio_mask(batch_aspect_ratios, max_image_tiles=max_image_tiles)
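A runnable sketch, re-implementing the same logic as the source in section 14:

import numpy as np

def build_aspect_ratio_mask(aspect_ratios, max_image_tiles):
    # 1 marks a tile slot that holds real data, 0 marks padding.
    batch_size = len(aspect_ratios)
    max_num_images = max(len(row) for row in aspect_ratios)
    mask = np.zeros((batch_size, max_num_images, max_image_tiles), dtype=np.int64)
    mask[:, :, 0] = 1  # every image slot (even padded ones) gets its first tile marked
    for i, sample in enumerate(aspect_ratios):
        for j, (num_tiles_w, num_tiles_h) in enumerate(sample):
            mask[i, j, : num_tiles_w * num_tiles_h] = 1
    return mask

print(build_aspect_ratio_mask([[(1, 1)], [(1, 2), (2, 2)]], max_image_tiles=4))
# [[[1 0 0 0]
#   [1 0 0 0]]   <- padded image slot still has its first tile marked
#  [[1 1 0 0]
#   [1 1 1 1]]]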
12. Packing the Final Output
- BatchFeature: all processed images and their metadata (aspect ratio ids, aspect ratio mask) are packed into a BatchFeature object, ready for downstream processing and model input.
- encoded_inputs contains pixel_values (the preprocessed pixels), aspect_ratio_ids, aspect_ratio_mask, and the other required metadata.
# images (np.ndarray) with shape (batch_size, max_num_images, max_image_tiles, channels, tile_height, tile_width)
# aspect_ratio_ids (np.ndarray) with shape (batch_size, max_num_images) - aspect ratio ids for each image, padded to max_num_images with 0
# num_tiles (List[List[int]]) with (batch_size, num_images_in_batch) - real number of tiles for each image, not padded
# aspect_ratio_mask (np.ndarray) with shape (batch_size, max_num_images, max_image_tiles) - number of tiles for each image, padded to max_num_images with 0
# assemble everything into a dict-like BatchFeature
encoded_inputs = BatchFeature(
data={
"pixel_values": images,
"aspect_ratio_ids": aspect_ratio_ids,
"aspect_ratio_mask": aspect_ratio_mask,
},
tensor_type=return_tensors,
)
encoded_inputs["num_tiles"] = num_tiles
return encoded_inputs
13. Input and Output Summary
- Input: a 28-wide × 24-high grayscale image.
- Output: the input format expected by the Llama Vision model. The final BatchFeature (essentially a dict) contains:
  - pixel_values: the preprocessed image data, of shape (batch_size, max_num_images, max_image_tiles, channels, tile_height, tile_width).
  - aspect_ratio_ids: the aspect ratio id of each image.
  - aspect_ratio_mask: the aspect ratio mask, marking which tile slots are valid.
  - num_tiles: the number of tiles of each image.
For our 28-wide × 24-high grayscale input, the pipeline therefore produces pixel_values of shape (batch_size, max_num_images, max_image_tiles, 3, 560, 560), here (1, 1, 4, 3, 560, 560), as required by the Llama Vision model.
# Fetch the preprocessed pixel_values and the other outputs
pixel_values = processed_image["pixel_values"]
aspect_ratio_ids = processed_image["aspect_ratio_ids"]
aspect_ratio_mask = processed_image["aspect_ratio_mask"]
num_tiles = processed_image["num_tiles"]
# Print the results (or use the processed data further)
print("Pixel Values Shape:", pixel_values.shape)
print("Aspect Ratio IDs:", aspect_ratio_ids)
print("Aspect Ratio Mask:", aspect_ratio_mask)
print("Number of Tiles:", num_tiles)
14. Complete Source of image_processing_mllama.py
Below is the complete original source file as provided upstream, annotated with the same walkthrough comments as above.
# coding=utf-8
# Copyright 2024 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
from functools import lru_cache
from typing import Dict, List, Optional, Tuple, Union
import numpy as np
from ...image_processing_utils import BaseImageProcessor, BatchFeature
from ...image_transforms import (
PaddingMode,
get_image_size,
pad,
resize,
)
from ...image_utils import (
IMAGENET_STANDARD_MEAN,
IMAGENET_STANDARD_STD,
ChannelDimension,
ImageInput,
PILImageResampling,
infer_channel_dimension_format,
is_valid_image,
is_vision_available,
to_numpy_array,
validate_preprocess_arguments,
)
from ...utils import TensorType, logging
if is_vision_available():
import PIL
from PIL import Image
logger = logging.get_logger(__name__)
@lru_cache(maxsize=10)
def get_all_supported_aspect_ratios(max_image_tiles: int) -> List[Tuple[int, int]]:
"""
Computes all allowed aspect ratios for a given maximum number of input tiles.
This function calculates all possible arrangements of tiles that can be formed
within the constraint of the maximum number of tiles. Each arrangement is
represented by its aspect ratio (width/height) and the corresponding tile configuration.
Args:
max_image_tiles (`int`):
The maximum number of tiles allowed.
Returns:
`List[Tuple[int, int]]`: A list of tuples, each tuple representing a valid (width, height)
configuration in terms of number of tiles.
Example:
>>> get_all_supported_aspect_ratios(4)
[(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (3, 1), (4, 1)]
"""
aspect_ratios = []
for width in range(1, max_image_tiles + 1):
for height in range(1, max_image_tiles + 1):
if width * height <= max_image_tiles:
aspect_ratios.append((width, height))
return aspect_ratios
def get_image_size_fit_to_canvas(
image_height: int,
image_width: int,
canvas_height: int,
canvas_width: int,
tile_size: int,
) -> Tuple[int, int]:
"""
Calculates the new size of an image to fit within a canvas while maintaining aspect ratio.
This function calculates the optimal size for an image to fit within a canvas defined by
canvas_height and canvas_width, while ensuring that the image dimensions are not smaller than
tile_size. If the image is larger than the canvas, the returned size will fit within the canvas.
If the image already fits within the canvas, the size remains unchanged.
The aspect ratio of the original image is preserved.
Args:
image_height (`int`):
The height of the original image.
image_width (`int`):
The width of the original image.
canvas_height (`int`):
The height of the canvas.
canvas_width (`int`):
The width of the canvas.
tile_size (`int`):
The tile size.
Returns:
`Tuple[int, int]`: A tuple containing the new height and width of the image.
"""
# Set target image size in between `tile_size` and canvas_size
target_width = np.clip(image_width, tile_size, canvas_width)
target_height = np.clip(image_height, tile_size, canvas_height)
scale_h = target_height / image_height
scale_w = target_width / image_width
if scale_w < scale_h:
new_width = target_width
new_height = min(math.floor(image_height * scale_w), target_height)
else:
new_height = target_height
new_width = min(math.floor(image_width * scale_h), target_width)
return new_height, new_width
@lru_cache(maxsize=100)
def get_optimal_tiled_canvas(
image_height: int,
image_width: int,
max_image_tiles: int,
tile_size: int,
) -> Tuple[int, int]:
r"""
Determines the best canvas based on image and tile size and maximum number of tiles.
First, calculates possible resolutions based on the maximum number of tiles and tile size.
For example for max_image_tiles=2, tile_size=100, possible tile arrangements are:
[(1, 1), (1, 2), (2, 1)] and corresponding canvas sizes are:
[(100, 100), (100, 200), (200, 100)]
For each possible resolution, calculates the scaling factors for
width and height, and selects the smallest one, which is the limiting side.
E.g. to match the canvas you can upscale height by 2x, and width by 1.5x,
therefore, the maximum upscaling you can do is min(2, 1.5) = 1.5.
If upscaling is possible (any of the scaling factors is greater than 1),
then picks the smallest upscaling factor > 1.
If upscaling is not possible, then picks the largest scaling factor <= 1, i.e.
reduce downscaling as much as possible.
If there are multiple resolutions with the same max scale, we pick the one with the lowest area,
to minimize padding. E.g., the same image can be upscaled to 224x224 and 224x448, but the latter
has more padding.
Example of canvases made from tiles:
To visualize how the image can fit onto different tile grids, let's try fitting an ASCII cat into the tiles.
Here's an ASCII cat image you want to fit into the tiles:
/\_/\
( o.o )
> ^ <
If `num_tiles=6`, possible tile grids would look like this:
**2x3 Canvas (2 tiles wide, 3 tiles tall)**: -> total of 6 tiles
+-------+-------+
| /\_/\ | 0 | <- Cat image split across two tiles horizontally
+-------+-------+
| > ^ < | 0 | <- Remaining part of the cat occupies the left tile
+-------+-------+
|( o.o )| 0 |
+-------+-------+
**3x2 Canvas (3 tiles wide, 2 tiles tall)**: -> total of 6 tiles
+-------+-------+-------+
| /\_/\ |( o.o )| 0 | <- Cat image occupies the first two tiles, 1 tile remains empty
+-------+-------+-------+
| > ^ < | 0 | 0 | <- Remaining part of the cat occupies the left tile
+-------+-------+-------+
**1x6 Canvas (1 tile wide, 6 tiles tall)**: -> total of 6 tiles
+-------+
| /\_/\ | <- Top part of the cat
+-------+
|( o.o )| <- Middle part of the cat
+-------+
| > ^ < | <- Bottom part of the cat
+-------+
| 0 |
+-------+
| 0 |
+-------+
| 0 |
+-------+
Given that the tiles you get depend on the chosen aspect ratio, you have to add
embedding in the modeling code to help it know if it got a 3x2 or a 1x6 or a 2x3
aspect ratio.
The function tests these arrangements to find the smallest canvas where the image fits.
If multiple canvases fit, it selects the one where the dimensions are closest to the image size.
In this case the first canvas is the closest to the original image.
You then feed all of the tiles to the model:
+-------+-------+-------+-------+-------+-------+
- | /\_/\ |( o.o )| > ^ < | 0 | 0 | 0 | <- Last canvas
+-------+-------+-------+-------+-------+-------+
+-------+-------+-------+-------+-------+-------+
- | /\_/\ | 0 |( o.o )| 0 | > ^ < | 0 | <- First canvas
+-------+-------+-------+-------+-------+-------+
+-------+-------+-------+-------+-------+-------+
- | /\_/\ |( o.o )| 0 | > ^ < | 0 | 0 | <- second canvas
+-------+-------+-------+-------+-------+-------+
For each tile, you have num_channels (usually RGB so 3), tile_width, tile_height
Args:
image_height (`int`):
The height of the image.
image_width (`int`):
The width of the image.
max_image_tiles (`int`):
The maximum number of tiles any image can be split into.
tile_size (`int`):
The tile size.
Returns:
`Tuple[int, int]`: The best canvas resolution [height, width] for the given image.
"""
possible_tile_arrangements = get_all_supported_aspect_ratios(max_image_tiles)
possible_canvas_sizes = np.array(possible_tile_arrangements) * tile_size
# get all possible resolutions heights/widths
target_heights, target_widths = np.array(possible_canvas_sizes).T
# get scaling factors to resize the image without distortion
scale_h = target_heights / image_height
scale_w = target_widths / image_width
# get the min scale between width and height (limiting side -> no distortion)
scales = np.where(scale_w > scale_h, scale_h, scale_w)
# filter only scales that allow upscaling
upscaling_options = scales[scales >= 1]
if len(upscaling_options) > 0:
selected_scale = np.min(upscaling_options)
else:
# no upscaling possible,
# get the minimum downscaling (max scale for scales<1)
downscaling_options = scales[scales < 1]
selected_scale = np.max(downscaling_options)
# get all resolutions that support this scaling factor,
# e.g. you can upscale to 224x224, 224x448, 224x672 without distortion
chosen_canvas = possible_canvas_sizes[scales == selected_scale]
# if there are multiple resolutions,
# get the one with minimum area to reduce padding
if len(chosen_canvas) > 1:
areas = chosen_canvas[:, 0] * chosen_canvas[:, 1]
optimal_idx = np.argmin(areas)
optimal_canvas = chosen_canvas[optimal_idx]
else:
optimal_canvas = chosen_canvas[0]
return optimal_canvas
def split_to_tiles(image: np.ndarray, num_tiles_height: int, num_tiles_width: int) -> np.ndarray:
"""
Split an image into a specified number of tiles along its width and height dimensions.
Args:
image (`np.ndarray`):
Input image with shape (num_channels, height, width).
num_tiles_height (`int`):
Number of tiles to split the image into along its height.
num_tiles_width (`int`):
Number of tiles to split the image into along its width.
Returns:
`np.ndarray`:
Array of image tiles with shape (num_tiles_width * num_tiles_height, num_channels, tile_height, tile_width).
"""
num_channels, height, width = image.shape
tile_height = height // num_tiles_height
tile_width = width // num_tiles_width
image = image.reshape(num_channels, num_tiles_height, tile_height, num_tiles_width, tile_width)
# Permute to (num_tiles_height, num_tiles_width, num_channels, tile_height, tile_width)
image = image.transpose(1, 3, 0, 2, 4)
# Reshape into the desired output shape (num_tiles_width * num_tiles_height, num_channels, tile_height, tile_width)
image = image.reshape(num_tiles_width * num_tiles_height, num_channels, tile_height, tile_width)
return np.ascontiguousarray(image)
def build_aspect_ratio_mask(aspect_ratios: List[List[Tuple[int, int]]], max_image_tiles: int) -> np.ndarray:
"""
Builds a mask for the aspect ratios of the images.
Args:
aspect_ratios (`List[List[Tuple[int, int]]]`):
A list of lists containing aspect ratios for each image in the batch.
Each aspect ratio is represented as a tuple of (width, height) in terms of number of tiles.
max_image_tiles (`int`):
The maximum number of tiles any image can be split into.
Returns:
`np.ndarray`: A 3D numpy array of shape (batch_size, max_num_images, max_image_tiles).
The mask contains 1s for valid tiles and 0s for padding.
"""
batch_size = len(aspect_ratios)
max_num_images = max([len(row) for row in aspect_ratios])
aspect_ratio_mask = np.zeros((batch_size, max_num_images, max_image_tiles), dtype=np.int64)
# Set the first tile to 1 for all aspect ratios
# because in original implementation aspect ratios are padded with (1, 1),
# but original code examples are not built to handle batches, so we might remove it later
aspect_ratio_mask[:, :, 0] = 1
# Set the aspect ratio mask for the rest of the tiles
for i, sample_aspect_ratios in enumerate(aspect_ratios):
for j, (num_tiles_w, num_tiles_h) in enumerate(sample_aspect_ratios):
aspect_ratio_mask[i, j, : num_tiles_w * num_tiles_h] = 1
return aspect_ratio_mask
def pack_images(
batch_images: List[List[np.ndarray]],
max_image_tiles: int,
) -> Tuple[np.ndarray, List[List[int]]]:
"""
Stack a list of lists of images with variable lengths into a numpy array, applying zero padding as needed.
Each list in the input represents a batch sample, and each image within a list is expected to be
pre-split into tiles. The resulting array will have a shape of
(batch_size, max_num_images, max_image_tiles, channels, tile_height, tile_width).
Args:
batch_images (`List[List[np.ndarray]]`):
A list of lists of image tiles. Each inner list represents
a batch sample containing multiple images, where each image is pre-split into tiles.
The shape of each tile array is (num_tiles, channels, tile_height, tile_width).
max_image_tiles (int):
The maximum number of tiles any image was potentially split into.
Returns:
`Tuple[np.ndarray, List[List[int]]]`: A tuple containing:
- stacked_images (`np.ndarray`):
A numpy array of stacked images with shape
(batch_size, max_num_images, max_image_tiles, channels, tile_height, tile_width).
- all_num_tiles (`List[List[int]]`):
A list of lists containing the number of tiles
for each image in each batch sample.
"""
# Determine output shape
batch_size = len(batch_images)
max_num_images = max([len(images) for images in batch_images])
shapes = [image.shape for images in batch_images for image in images]
_, channels, tile_height, tile_width = shapes[0]
# Initialize the stacked images array with zeros
stacked_images = np.zeros(
(batch_size, max_num_images, max_image_tiles, channels, tile_height, tile_width),
dtype=np.float32,
)
# Fill the stacked images array with the tiled images from the batch
all_num_tiles = []
for i, images in enumerate(batch_images):
num_sample_tiles = []
for j, image in enumerate(images):
num_tiles = image.shape[0]
stacked_images[i, j, :num_tiles] = image
num_sample_tiles.append(num_tiles)
all_num_tiles.append(num_sample_tiles)
return stacked_images, all_num_tiles
def pack_aspect_ratios(aspect_ratios: List[List[Tuple[int, int]]], pad_value: int = 1) -> np.ndarray:
"""
Stack a list of aspect ratios into a numpy array.
Args:
aspect_ratios (`List[List[Tuple[int, int]]]`):
A list of aspect ratios.
pad_value (`int`, *optional*, defaults to 1):
The value to pad the aspect ratios with.
Returns:
`np.ndarray`:
The aspect ratios stacked into a numpy array with shape (batch_size, max_num_images, 2).
"""
batch_size = len(aspect_ratios)
max_num_images = max([len(row) for row in aspect_ratios])
aspect_ratios_stacked = np.full((batch_size, max_num_images, 2), pad_value, dtype=np.int64)
for i, row in enumerate(aspect_ratios):
if len(row) > 0:
aspect_ratios_stacked[i, : len(row)] = np.array(row)
return aspect_ratios_stacked
def convert_aspect_ratios_to_ids(aspect_ratios: List[List[Tuple[int, int]]], max_image_tiles: int) -> np.ndarray:
"""
Convert aspect ratio tuples to unique ids.
For batch padding we use 0, because there might be different number of images in each batch.
The aspect ratio ids start from 1, with 1 corresponding to the first supported aspect ratio.
Args:
aspect_ratios (`List[List[Tuple[int, int]]]`):
A list of aspect ratios for each image in the batch.
max_image_tiles (`int`):
The maximum number of tiles any image can be split into.
Returns:
`np.ndarray`:
The aspect ratios ids as a numpy array with shape (batch_size, max_num_images).
Each id corresponds to the index of the aspect ratio in the list of supported aspect ratios,
offset by 1 (so 0 can be used for padding).
"""
batch_size = len(aspect_ratios)
max_num_images = max([len(row) for row in aspect_ratios])
supported_aspect_ratios = get_all_supported_aspect_ratios(max_image_tiles)
aspect_ratios_ids = np.zeros((batch_size, max_num_images), dtype=np.int64)
for i, sample_aspect_ratios in enumerate(aspect_ratios):
for j, (num_tiles_h, num_tiles_w) in enumerate(sample_aspect_ratios):
aspect_ratios_ids[i, j] = supported_aspect_ratios.index((num_tiles_h, num_tiles_w)) + 1
return aspect_ratios_ids
def to_channel_dimension_format(
image: np.ndarray,
channel_dim: Union[ChannelDimension, str],
input_channel_dim: Optional[Union[ChannelDimension, str]] = None,
) -> np.ndarray:
"""
Converts `image` to the channel dimension format specified by `channel_dim`.
Args:
image (`numpy.ndarray`):
The image to have its channel dimension set.
channel_dim (`ChannelDimension`):
The channel dimension format to use.
input_channel_dim (`ChannelDimension`, *optional*):
The channel dimension format of the input image. If not provided, it will be inferred from the input image.
Returns:
`np.ndarray`:
The image with the channel dimension set to `channel_dim`.
"""
if not isinstance(image, np.ndarray):
raise ValueError(f"Input image must be of type np.ndarray, got {type(image)}")
if input_channel_dim is None:
input_channel_dim = infer_channel_dimension_format(image)
target_channel_dim = ChannelDimension(channel_dim)
if input_channel_dim == target_channel_dim:
return image
if target_channel_dim == ChannelDimension.FIRST:
image = image.transpose((2, 0, 1))
elif target_channel_dim == ChannelDimension.LAST:
image = image.transpose((1, 2, 0))
else:
raise ValueError("Unsupported channel dimension format: {}".format(channel_dim))
return image
# Copied from transformers.models.idefics2.image_processing_idefics2.convert_to_rgb
def convert_to_rgb(image: ImageInput) -> ImageInput:
"""
Converts an image to RGB format. Only converts if the image is of type PIL.Image.Image, otherwise returns the image
as is.
Args:
image (Image):
The image to convert.
"""
if not isinstance(image, PIL.Image.Image):
return image
# `image.convert("RGB")` would only work for .jpg images, as it creates a wrong background
# for transparent images. The call to `alpha_composite` handles this case
if image.mode == "RGB":
return image
image_rgba = image.convert("RGBA")
background = Image.new("RGBA", image_rgba.size, (255, 255, 255))
alpha_composite = Image.alpha_composite(background, image_rgba)
alpha_composite = alpha_composite.convert("RGB")
return alpha_composite
# Modified from transformers.models.idefics2.image_processing_idefics2.make_list_of_images
def make_list_of_images(images: ImageInput) -> List[List[Optional[np.ndarray]]]:
"""
Convert a single image or a list of images to a list of numpy arrays.
Args:
images (`ImageInput`):
A single image or a list of images.
Returns:
A list of numpy arrays.
"""
# If it's a single image, convert it to a list of lists
if is_valid_image(images):
output_images = [[images]]
# If it's a list of images, it's a single batch, so convert it to a list of lists
elif isinstance(images, (list, tuple)) and is_valid_list_of_images(images):
output_images = [images]
# If it's a list of batches, it's already in the right format
elif (
isinstance(images, (list, tuple))
and all(isinstance(images_i, (list, tuple)) for images_i in images)
and any(is_valid_list_of_images(images_i) for images_i in images)
):
output_images = images
else:
raise ValueError(
"Invalid input type. Must be a single image, a list of images, or a list of batches of images."
)
return output_images
def is_valid_list_of_images(images: List):
return images and all(is_valid_image(image) for image in images)
def _validate_size(size: Dict[str, int]) -> None:
if not ("height" in size and "width" in size):
raise ValueError(f"Argument `size` must be a dictionary with keys 'height' and 'width'. Got: {size}")
if size["height"] != size["width"]:
raise ValueError(f"Argument `size` must have the same height and width, got {size}")
def _validate_mllama_preprocess_arguments(do_resize, size, do_pad, max_image_tiles):
if not do_pad:
raise ValueError("MllamaImageProcessor doesn't support `do_pad=False` mode.")
if not do_resize:
raise ValueError("MllamaImageProcessor doesn't support `do_resize=False` mode.")
if max_image_tiles is None or max_image_tiles <= 0:
raise ValueError(f"MllamaImageProcessor `max_image_tiles` must be a positive integer, got {max_image_tiles}.")
_validate_size(size)
class MllamaImageProcessor(BaseImageProcessor):
"""
Constructs a Mllama image processor.
Args:
do_convert_rgb (`bool`, *optional*, defaults to `True`):
Whether to convert the image to RGB. This is useful if the input image is of a different format e.g. RGBA.
Only has an effect if the input image is in the PIL format.
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image.
size (`Dict[str, int]`, *optional*, defaults to `self.size`):
Size of the image tile. Should be a dictionary containing 'height' and 'width' keys, both with integer values.
The height and width values should be equal.
resample (`int`, *optional*, defaults to `Resampling.BILINEAR`):
Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only
has an effect if `do_resize` is set to `True`.
do_rescale (`bool`, *optional*, defaults to `True`):
Whether to rescale the image.
rescale_factor (`float`, *optional*, defaults to `1/255`):
Rescale factor to rescale the image by if `do_rescale` is set to `True`.
do_normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the image.
image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to
`True`.
do_pad (`bool`, *optional*, defaults to `True`):
Whether or not to pad the images to the largest height and width in the batch.
max_image_tiles (`int`, *optional*, defaults to 4):
The maximum number of tiles to split the image into.
"""
model_input_names = ["pixel_values", "num_tiles", "aspect_ratio_ids", "aspect_ratio_mask"]
def __init__(
self,
do_convert_rgb: bool = True,
do_resize: bool = True,
size: Optional[Dict[str, int]] = None,
resample: PILImageResampling = PILImageResampling.BILINEAR,
do_rescale: bool = True,
rescale_factor: float = 1 / 255,  # preset from the value range (0-255) of a single pixel
do_normalize: bool = True,
image_mean: Optional[Union[float, List[float]]] = None,
image_std: Optional[Union[float, List[float]]] = None,
do_pad: bool = True,
max_image_tiles: int = 4,
**kwargs,
) -> None:
super().__init__(**kwargs)
self.do_convert_rgb = do_convert_rgb
self.do_resize = do_resize
self.size = size if size is not None else {"height": 224, "width": 224}
self.resample = resample
self.do_rescale = do_rescale
self.rescale_factor = rescale_factor
self.do_normalize = do_normalize
self.image_mean = image_mean if image_mean is not None else IMAGENET_STANDARD_MEAN
self.image_std = image_std if image_std is not None else IMAGENET_STANDARD_STD
self.do_pad = do_pad
self.max_image_tiles = max_image_tiles
_validate_mllama_preprocess_arguments(self.do_resize, self.size, self.do_pad, self.max_image_tiles)
def preprocess(
self,
images: ImageInput,
do_convert_rgb: Optional[bool] = None,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
resample: Optional[PILImageResampling] = None,
do_rescale: Optional[bool] = None,
rescale_factor: Optional[float] = None,
do_normalize: Optional[bool] = None,
image_mean: Optional[Union[float, List[float]]] = None,
image_std: Optional[Union[float, List[float]]] = None,
do_pad: Optional[bool] = None,
max_image_tiles: Optional[int] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
):
"""
Preprocess a batch of images.
Args:
images (`ImageInput`):
A list of images to preprocess.
do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
Whether to convert the image to RGB.
do_resize (`bool`, *optional*, defaults to `self.do_resize`):
Whether to resize the image.
size (`Dict[str, int]`, *optional*, defaults to `self.size`):
Size of the image tile. Should be a dictionary containing 'height' and 'width' keys, both with integer values.
The height and width values should be equal.
resample (`int`, *optional*, defaults to `self.resample`):
Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only
has an effect if `do_resize` is set to `True`.
do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
Whether to rescale the image.
rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
Rescale factor to rescale the image by if `do_rescale` is set to `True`.
do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
Whether to normalize the image.
image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to
`True`.
do_pad (`bool`, *optional*, defaults to `self.do_pad`):
Whether or not to pad the images to the largest height and width in the batch.
max_image_tiles (`int`, *optional*, defaults to `self.max_image_tiles`):
The maximum number of tiles to split the image into.
input_data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format for the input image. If unset, the channel dimension format is inferred
from the input image. Can be one of:
- `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
- `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
- `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
return_tensors (`str` or `TensorType`, *optional*):
The type of tensors to return. Can be one of:
- Unset: Return a list of `np.ndarray`.
- `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
- `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
- `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
- `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
Returns:
`BatchFeature` of the following structure:
- **pixel_values** (`TensorType`): The preprocessed pixel values.
- **aspect_ratio_ids** (`TensorType`): The aspect ratio ids of the images.
- **num_tiles** (`List[List[int]]`): The number of tiles for each image in the batch.
"""
do_convert_rgb = do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb
do_resize = do_resize if do_resize is not None else self.do_resize
size = size if size is not None else self.size
resample = resample if resample is not None else self.resample
do_rescale = do_rescale if do_rescale is not None else self.do_rescale
rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor
do_normalize = do_normalize if do_normalize is not None else self.do_normalize
image_mean = image_mean if image_mean is not None else self.image_mean
image_std = image_std if image_std is not None else self.image_std
do_pad = do_pad if do_pad is not None else self.do_pad
max_image_tiles = max_image_tiles if max_image_tiles is not None else self.max_image_tiles
validate_preprocess_arguments(
do_rescale=do_rescale,
rescale_factor=rescale_factor,
do_normalize=do_normalize,
image_mean=image_mean,
image_std=image_std,
do_resize=do_resize,
size=size,
resample=resample,
)
# extra validation
_validate_mllama_preprocess_arguments(do_resize, size, do_pad, max_image_tiles)
        # 1. wrap the image into a list, twice (a list of batch samples, each a list of images)
        images_list = make_list_of_images(images)
        # 2. for a grayscale image, each pixel value is replicated three times, e.g. 151 ---> [151, 151, 151]
        if self.do_convert_rgb:
            # the outer loop iterates over batch samples (one per text sequence), the inner over the images of one sample
            images_list = [[convert_to_rgb(image) for image in images] for images in images_list]
        # 3. convert each image object into concrete pixel values, returned as a numpy array of shape 24*28*3
        images_list = [[to_numpy_array(image) for image in images] for images in images_list]
batch_images = []
batch_aspect_ratios = []
# iterate over batch samples
for images in images_list:
sample_images = []
sample_aspect_ratios = []
# iterate over images in a batch sample
for image in images:
# convert images to channels first format for faster processing
# LAST is slower for `pad` and not supported by `split_to_tiles`
                data_format = ChannelDimension.FIRST  # the first dimension is the channel
                # 4. shape goes from 24*28*3 to 3*24*28; for a grayscale input the three planes are identical
                image = to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format)
                # do_resize=False is not supported, validated
                # 5. resize: each plane is resized with bilinear interpolation, giving a 3*480*560 image
image, aspect_ratio = self.resize(
image=image,
size=size,
resample=resample,
max_image_tiles=max_image_tiles,
input_data_format=data_format,
data_format=data_format,
)
                # do_pad=False is not supported, validated
                # 6. pad up to one of the eight canvas types; here a 1*1-tile canvas, i.e. 3*560*560, rows 480+ zero-filled
image = self.pad(
image=image,
size=size,
                    aspect_ratio=aspect_ratio,  # a 1*1-tile grid (one of eight possible grids)
input_data_format=data_format,
data_format=data_format,
)
                # 7. multiply every element of the 3*560*560 image by 1/255
if do_rescale:
image = self.rescale(
image=image,
                        scale=rescale_factor,  # 1/255
input_data_format=input_data_format,
data_format=data_format,
)
                # 8. subtract the per-channel mean from every element, then divide by the per-channel std
if do_normalize:
image = self.normalize(
image=image,
mean=image_mean,
std=image_std,
input_data_format=input_data_format,
data_format=data_format,
)
                # a 1*1 tile grid: both values are 1
                num_tiles_height, num_tiles_width = aspect_ratio
                # 9. with a 1*1 grid this just adds a leading dimension: 1 * 3 * 560 * 560
image = split_to_tiles(image, num_tiles_height, num_tiles_width)
sample_images.append(image)
sample_aspect_ratios.append((num_tiles_height, num_tiles_width))
batch_images.append(sample_images)
batch_aspect_ratios.append(sample_aspect_ratios)
        # 10. packing; an example:
        # batch_images = [
        #     [np.random.rand(1, 3, 560, 560)],  # first sample: 1 image, split into 1 tile with 3 channels
        #     [np.random.rand(1, 3, 560, 560), np.random.rand(2, 3, 560, 560)],  # second sample: 2 images, the second split into 2 tiles
        # ]
        # max_image_tiles = 4
        # images ends up with shape (2, 2, 4, 3, 560, 560): two samples, at most two images per sample,
        # at most 4 tiles per image, 3 channels per tile, and each tile is 560x560
        # the first sample's tiles fill images[0, 0]; the rest (e.g. images[0, 1]) stay zero
        # the second sample's first image fills images[1, 0]; the 2 tiles of its second image fill images[1, 1]
        # all_num_tiles is [[1], [1, 2]]: the first sample has 1 image with 1 tile; the second sample has
        # 2 images, with 1 tile and 2 tiles respectively
images, num_tiles = pack_images(batch_images, max_image_tiles)
        # an example for convert_aspect_ratios_to_ids:
        # with max_image_tiles = 4 the possible ratios are [(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (3, 1), (4, 1)]
        # suppose: aspect_ratios = [
        #     [(1, 1)],          # first sample: 1 image with ratio (1, 1)
        #     [(1, 2), (2, 1)],  # second sample: 2 images with ratios (1, 2) and (2, 1)
        # ]
        # get_all_supported_aspect_ratios(4) first enumerates the eight supported ratios above
        # the result is array([[1, 0], [2, 5]]): the 1-based id of each ratio, zero-padded for samples with fewer images
aspect_ratio_ids = convert_aspect_ratios_to_ids(batch_aspect_ratios, max_image_tiles=max_image_tiles)
        # an example of what build_aspect_ratio_mask does:
        # with max_image_tiles = 6, a ratio of (2, 3) would mean 2 x 3 = 6 tiles, so the first 6 mask entries are set to 1
        # another example: aspect_ratios = [ [(1, 1)],           # first sample: 1 image with ratio (1, 1)
        #                                    [(1, 2), (2, 2)] ]  # second sample: 2 images with ratios (1, 2) and (2, 2)
        # max_image_tiles = 4
        # initial state: aspect_ratio_mask = np.zeros((2, 2, 4), dtype=np.int64)
        # set the first tile slot everywhere: aspect_ratio_mask[:, :, 0] = 1, giving
        #   [[[1, 0, 0, 0], [1, 0, 0, 0]], [[1, 0, 0, 0], [1, 0, 0, 0]]]
        # after marking each image's real tiles, the final mask is
        #   [[[1, 0, 0, 0], [1, 0, 0, 0]], [[1, 1, 0, 0], [1, 1, 1, 1]]]
aspect_ratio_mask = build_aspect_ratio_mask(batch_aspect_ratios, max_image_tiles=max_image_tiles)
# images (np.ndarray) with shape (batch_size, max_num_images, max_image_tiles, channels, tile_height, tile_width)
# aspect_ratio_ids (np.ndarray) with shape (batch_size, max_num_images) - aspect ratio ids for each image, padded to max_num_images with 0
# num_tiles (List[List[int]]) with (batch_size, num_images_in_batch) - real number of tiles for each image, not padded
# aspect_ratio_mask (np.ndarray) with shape (batch_size, max_num_images, max_image_tiles) - number of tiles for each image, padded to max_num_images with 0
        # assemble everything into a dict-like BatchFeature
encoded_inputs = BatchFeature(
data={
"pixel_values": images,
"aspect_ratio_ids": aspect_ratio_ids,
"aspect_ratio_mask": aspect_ratio_mask,
},
tensor_type=return_tensors,
)
encoded_inputs["num_tiles"] = num_tiles
return encoded_inputs
def pad(
self,
image: np.ndarray,
size: Dict[str, int],
aspect_ratio: Tuple[int, int],
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> np.ndarray:
"""
Pad an image to the `size` x `aspect_ratio`. For example, if size is {height: 224, width: 224} and aspect ratio is
(1, 2), the image will be padded to 224x448.
Args:
image (`np.ndarray`):
Image to resize.
size (`Dict[str, int]`):
Size of the output image.
aspect_ratio (`Tuple[int, int]`):
The aspect ratio of the image.
data_format (`str` or `ChannelDimension`, *optional*):
The channel dimension format of the image. If not provided, it will be the same as the input image.
input_data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format of the input image. If not provided, it will be inferred.
Returns:
`np.ndarray`: The padded image.
"""
_validate_size(size)
image_height, image_width = get_image_size(image, channel_dim=input_data_format)
num_tiles_height, num_tiles_width = aspect_ratio
padded_height = num_tiles_height * size["height"]
padded_width = num_tiles_width * size["width"]
pad_size = ((0, padded_height - image_height), (0, padded_width - image_width))
image = pad(
image,
pad_size,
mode=PaddingMode.CONSTANT,
constant_values=0,
data_format=data_format,
input_data_format=input_data_format,
)
return image
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
max_image_tiles: int,
resample: PILImageResampling = PILImageResampling.BILINEAR,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
) -> Union[np.ndarray, Tuple[int, int]]:
"""
Resizes an image to fit within a tiled canvas while maintaining its aspect ratio.
The optimal canvas size is calculated based on the maximum number of tiles and the tile size.
The function first determines the best tile arrangement for the image, then resizes the image
to fit within this canvas. The resized image and the number of tiles along the height and width
dimensions are returned.
Args:
image (`np.ndarray`):
Image to resize.
size (`Dict[str, int]`):
Size of the output image.
max_image_tiles (`int`):
The maximum number of tiles to split the image into.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use when resizing the image.
data_format (`str` or `ChannelDimension`, *optional*):
The channel dimension format of the image. If not provided, it will be the same as the input image.
input_data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format of the input image. If not provided, it will be inferred.
Returns:
`Union[np.ndarray, Tuple[int, int]]`: The resized image and a tuple containing the number of tiles
along the height and width dimensions.
"""
_validate_size(size)
image_height, image_width = get_image_size(image, channel_dim=input_data_format)
tile_size = size["height"]
canvas_height, canvas_width = get_optimal_tiled_canvas(
image_height=image_height,
image_width=image_width,
max_image_tiles=max_image_tiles,
tile_size=tile_size,
)
num_tiles_height = canvas_height // tile_size
num_tiles_width = canvas_width // tile_size
new_height, new_width = get_image_size_fit_to_canvas(
image_height=image_height,
image_width=image_width,
canvas_height=canvas_height,
canvas_width=canvas_width,
tile_size=tile_size,
)
image = resize(
image,
(new_height, new_width),
resample=resample,
data_format=data_format,
input_data_format=input_data_format,
)
return image, (num_tiles_height, num_tiles_width)