The Life of a Tensor (PyTorch Edition), Part 1

This is a fairly hardcore article; because there is a lot to cover, it is split into two parts.

Have you ever wondered how a Tensor is actually created in PyTorch? Digging into this deepens your understanding of the framework and makes you more fluent with it.

The article walks through a large amount of PyTorch C++ source code, version 1.4.0a, and is aimed at readers with some familiarity with the PyTorch codebase. It also touches on the basics of Python's C/C++ extension mechanism. The first line of each code snippet indicates which file it comes from. Note that some of these files are auto-generated: they do not exist in the raw source tree and only appear after a build.

One more caveat: PyTorch is under active development and its interfaces change frequently, so by the time you read this, the code shown here may differ slightly from master. Most of the logic changes little, though; what matters is the core working principle.

Let's get started!

Say we have a Tensor. No, two: create two random tensors and add them together.

import torch
res = torch.rand(3, 4)[0] + torch.rand(3, 4)
Running this prints:

tensor([[0.3091, 0.5503, 1.0780, 0.9044],
        [0.5770, 0.5245, 0.3225, 1.4672],
        [0.1581, 1.0439, 0.3313, 0.9924]])
Well, the output itself doesn't matter. Let's first break the code above into finer-grained steps:

_t1 = torch.rand(3, 4)
_t2 = _t1.__getitem__(0)
del _t1
_t3 = torch.rand(3, 4)
res = _t2.add(_t3)
del _t2
del _t3

At the end, res is still alive.

Let's look at what happens in the first statement:

_t1 = torch.rand(3, 4) # <--
_t2 = _t1.__getitem__(0)
del _t1
_t3 = torch.rand(3, 4)
res = _t2.add(_t3)
del _t2
del _t3
torch.rand actually lives in the torch._C._VariableFunctions module. That is, torch.rand is not a Python function, just the name of a method in that module; calling torch.rand invokes the module's rand method. The module itself is produced through Python's C/C++ extension mechanism, and in practice the code behind torch.rand is auto-generated from a YAML file.

That file is a list of function signatures used for code generation. Many code files in the PyTorch source tree are generated by gen.py. Why generate code at all? Because many of these functions are near-identical and highly repetitive, and generating them automatically removes most of the duplicated work.

// aten/src/ATen/native/native_functions.yaml

- func: scalar_tensor(Scalar s, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
- func: rand(int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
- func: rand(int[] size, *, Generator? generator, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
- func: rand(int[] size, *, Tensor(a!) out) -> Tensor(a!)
- func: rand(int[] size, *, Generator? generator, Tensor(a!) out) -> Tensor(a!)
- func: rand_like(Tensor self) -> Tensor
- func: rand_like(Tensor self, *, ScalarType dtype, Layout layout, Device device, bool pin_memory=False) -> Tensor

From the declarations above, the generator emits rand (along with the other functions) at the ${py_method_defs} placeholder in the template below.

// tools/autograd/templates/python_torch_functions.cpp

static PyMethodDef torch_functions[] = {
  {"arange", (PyCFunction)THPVariable_arange, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"as_tensor", (PyCFunction)THPVariable_as_tensor, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"dsmm", (PyCFunction)THPVariable_mm, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"from_numpy", (PyCFunction)THPVariable_from_numpy, METH_STATIC | METH_O, NULL},
  {"hsmm", (PyCFunction)THPVariable_hspmm, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"_promote_types", (PyCFunction)THPVariable__promote_types, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"nonzero", (PyCFunction)THPVariable_nonzero, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"randint", (PyCFunction)THPVariable_randint, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"range", (PyCFunction)THPVariable_range, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"saddmm", (PyCFunction)THPVariable_sspaddmm, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"sparse_coo_tensor", (PyCFunction)THPVariable_sparse_coo_tensor, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"spmm", (PyCFunction)THPVariable_mm, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"tensor", (PyCFunction)THPVariable_tensor, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"get_device", (PyCFunction)THPVariable_get_device, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  ${py_method_defs}
  {NULL}
};
Running the signatures from native_functions.yaml through the generation machinery fills the ${py_method_defs} slot in the template above and produces a new file, in which we can find our "rand":

//torch/csrc/autograd/generated/python_torch_functions.cpp

static PyMethodDef torch_functions[] = {
  {"arange", (PyCFunction)(void(*)(void))THPVariable_arange, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"as_tensor", (PyCFunction)(void(*)(void))THPVariable_as_tensor, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"dsmm", (PyCFunction)(void(*)(void))THPVariable_mm, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"from_numpy", (PyCFunction)THPVariable_from_numpy, METH_STATIC | METH_O, NULL},
  {"hsmm", (PyCFunction)(void(*)(void))THPVariable_hspmm, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"nonzero", (PyCFunction)(void(*)(void))THPVariable_nonzero, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"randint", (PyCFunction)(void(*)(void))THPVariable_randint, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"range", (PyCFunction)(void(*)(void))THPVariable_range, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"saddmm", (PyCFunction)(void(*)(void))THPVariable_sspaddmm, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"sparse_coo_tensor", (PyCFunction)(void(*)(void))THPVariable_sparse_coo_tensor, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"spmm", (PyCFunction)(void(*)(void))THPVariable_mm, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"tensor", (PyCFunction)(void(*)(void))THPVariable_tensor, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"get_device", (PyCFunction)(void(*)(void))THPVariable_get_device, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  // everything above matches the template; from here on the entries are generated
  {"numel", (PyCFunction)(void(*)(void))THPVariable_numel, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"__and__", (PyCFunction)(void(*)(void))THPVariable___and__, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  ...
  {"quantized_rnn_tanh_cell", (PyCFunction)(void(*)(void))THPVariable_quantized_rnn_tanh_cell, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"rand", (PyCFunction)(void(*)(void))THPVariable_rand, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"rand_like", (PyCFunction)(void(*)(void))THPVariable_rand_like, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"randint_like", (PyCFunction)(void(*)(void))THPVariable_randint_like, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"randn", (PyCFunction)(void(*)(void))THPVariable_randn, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"randn_like", (PyCFunction)(void(*)(void))THPVariable_randn_like, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"randperm", (PyCFunction)(void(*)(void))THPVariable_randperm, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  ...
  {"zeros", (PyCFunction)(void(*)(void))THPVariable_zeros, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {"zeros_like", (PyCFunction)(void(*)(void))THPVariable_zeros_like, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL},
  {NULL}
};
From the code above we can see that "rand" is bound to the function THPVariable_rand. Before we dig into that function, the method table (tp_methods) first has to be hooked up to a type object (PyTypeObject) so it can be exposed on the Python side:

// tools/autograd/templates/python_torch_functions.cpp

static PyTypeObject THPVariableFunctions = {
  PyVarObject_HEAD_INIT(NULL, 0)
  "torch._C._VariableFunctions",  /* tp_name */
  ...                             /* other slots elided */
  Py_TPFLAGS_DEFAULT,             /* tp_flags */
  NULL,                           /* tp_doc */
  torch_functions,                /* tp_methods */
  ...
};
Then comes the initialization, which registers this type object as an attribute of the Python module:

void initTorchFunctions(PyObject* module) {
  if (PyType_Ready(&THPVariableFunctions) < 0) {
    throw python_error();
  }
  Py_INCREF(&THPVariableFunctions);
  if (PyModule_AddObject(module, "_VariableFunctions", (PyObject*)&THPVariableFunctions) < 0) {
    throw python_error();
  }
}
With that done, torch/__init__.py re-exports these methods from the generated torch._C._VariableFunctions into the torch namespace, which is how Python-side calls find them:

for name in dir(_C._VariableFunctions):
    if name.startswith('__'):
        continue
    globals()[name] = getattr(_C._VariableFunctions, name)
Good. Now let's take a proper look at the function behind the {"rand", (PyCFunction)(void(*)(void))THPVariable_rand, METH_VARARGS | METH_KEYWORDS | METH_STATIC, NULL} entry.

// torch/csrc/autograd/generated/python_torch_functions.cpp

static PyObject * THPVariable_rand(PyObject* self_, PyObject* args, PyObject* kwargs)
{
  HANDLE_TH_ERRORS
  static PythonArgParser parser({
    "rand(IntArrayRef size, *, DimnameList? names, ScalarType dtype=None, Layout layout=torch.strided, Device device=None, bool pin_memory=False, bool requires_grad=False)",
    "rand(IntArrayRef size, *, Generator generator, DimnameList? names, ScalarType dtype=None, Layout layout=torch.strided, Device device=None, bool pin_memory=False, bool requires_grad=False)",
    "rand(IntArrayRef size, *, Generator generator, Tensor out=None, ScalarType dtype=None, Layout layout=torch.strided, Device device=None, bool pin_memory=False, bool requires_grad=False)",
    "rand(IntArrayRef size, *, Tensor out=None, ScalarType dtype=None, Layout layout=torch.strided, Device device=None, bool pin_memory=False, bool requires_grad=False)",
  }, /*traceable=*/true);

  ParsedArgs<9> parsed_args;
  auto r = parser.parse(args, kwargs, parsed_args);

  if (r.idx == 0) {
    auto size = r.intlist(0);
    auto __names = r.toDimnameListOptional(1);
    c10::optional<DimnameList> names = __names ? c10::make_optional(DimnameList(__names.value())) : c10::nullopt;
    auto dtype = r.scalartype(2);
    auto device = r.device(4);
    const auto options = TensorOptions()
        .dtype(dtype)
        .device(device)
        .layout(r.layout(3).layout)
        .requires_grad(r.toBool(6))
        .pinned_memory(r.toBool(5));
    return wrap(dispatch_rand(size, names, options));

  } else if (r.idx == 3) { // our call ends up in this branch
    if (r.isNone(1)) {
      auto size = r.intlist(0);
      auto dtype = r.scalartype(2);
      auto device = r.device(4);
      const auto options = TensorOptions()
          .dtype(dtype)
          .device(device)
          .layout(r.layout(3).layout)
          .requires_grad(r.toBool(6))
          .pinned_memory(r.toBool(5));
      return wrap(dispatch_rand(size, options));
    } else {
      check_out_type_matches(r.tensor(1), r.scalartype(2), r.isNone(2),
                             r.layout(3), r.isNone(3),
                             r.device(4), r.isNone(4));
      return wrap(dispatch_rand(r.intlist(0), r.tensor(1)).set_requires_grad(r.toBool(6)));
    }
  }
  Py_RETURN_NONE;
  END_HANDLE_TH_ERRORS
}
As you can see, what ultimately runs is dispatch_rand; for our plain torch.rand(3, 4) call, the argument parser matches the fourth signature, so we land in the r.idx == 3 branch. One thing to note: this function releases the GIL, so the C++ code here and whatever Python code is running stay out of each other's way:

// torch/csrc/autograd/generated/python_torch_functions_dispatch.h
inline Tensor dispatch_rand(IntArrayRef size, const TensorOptions & options) {
  maybe_initialize_cuda(options);
  AutoNoGIL no_gil; // release the GIL
  return torch::rand(size, options);
}
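Incidentally, AutoNoGIL is just an RAII guard around the CPython thread-state API. Here is a minimal sketch of the pattern (an illustration in the spirit of PyTorch's AutoNoGIL, or pybind11's gil_scoped_release; not the verbatim implementation):

#include <Python.h>

// Minimal RAII "release the GIL" guard: release on construction,
// re-acquire when the scope ends (even if an exception propagates).
struct AutoNoGIL {
  AutoNoGIL() : save_(PyEval_SaveThread()) {}
  ~AutoNoGIL() { PyEval_RestoreThread(save_); }
  PyThreadState* save_;
};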
Next we step into torch::rand. One thing to note here: torch::rand ultimately returns the tensor wrapped by autograd::make_variable, which means that if we do not need a differentiable tensor, we could simply return at::rand instead.

This is also why the PyTorch C++ frontend docs point out that a Tensor constructed directly with at::rand has no autograd capability:

// torch/csrc/autograd/generated/variable_factories.h

inline at::Tensor rand(at::IntArrayRef size, const at::TensorOptions & options = {}) {
  torch::jit::Node* node = nullptr;
  std::shared_ptr<jit::tracer::TracingState> tracer_state;
  if (jit::tracer::isTracing()) { // not taken here, since we are not using the JIT tracer
    tracer_state = jit::tracer::getTracingState();
    at::Symbol op_name;
    op_name = jit::Symbol::fromQualString("aten::rand");
    node = tracer_state->graph->create(op_name, /*num_outputs=*/0);
    jit::tracer::recordSourceLocation(node);
    jit::tracer::addInputs(node, "size", size);
    jit::tracer::addInputs(node, "options", options);
    tracer_state->graph->insertNode(node);

    jit::tracer::setTracingState(nullptr);
  }
  at::Tensor tensor = ([&]() {
    at::AutoNonVariableTypeMode non_var_type_mode(true);
    return at::rand(size, at::TensorOptions(options).is_variable(false));
  })();
  at::Tensor result =
      autograd::make_variable(std::move(tensor), /*requires_grad=*/options.requires_grad());
  if (tracer_state) {
    jit::tracer::setTracingState(std::move(tracer_state));
    jit::tracer::addOutput(node, result);
  }
  return result;
}
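From the user's side, the difference is easy to observe. A small hedged example against the libtorch C++ API (assuming a standard libtorch build; torch::requires_grad() is the options helper from the C++ frontend):

#include <torch/torch.h>
#include <iostream>

int main() {
  // torch::rand goes through variable_factories.h and wraps the result
  // with autograd::make_variable, so it can require grad.
  torch::Tensor a = torch::rand({3, 4}, torch::requires_grad());
  // at::rand builds the raw tensor only; no autograd metadata is attached.
  at::Tensor b = at::rand({3, 4});
  std::cout << a.requires_grad() << " " << b.requires_grad() << std::endl; // prints: 1 0
}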
Let's continue into at::rand:

// build/aten/src/ATen/Functions.h

static inline Tensor rand(IntArrayRef size, const TensorOptions & options) {
#ifdef USE_STATIC_DISPATCH
    return TypeDefault::rand(size, options);
#else // execution takes this path
    globalLegacyTypeDispatch().initForTensorTypeSet(at::detail::multi_dispatch_tensor_type_set(options));
    static auto table = globalATenDispatch().getOpTable("aten::rand(int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor");
    return table->callUnboxed<Tensor, IntArrayRef, const TensorOptions &>(size, options);
#endif
}
We can see that the argument to getOpTable is a string spelling out the exact function we want to call; in other words, getOpTable can locate the corresponding function from its string-form schema. First, though, what is globalATenDispatch()?

// aten/src/ATen/core/ATenDispatch.cpp
// doesn't this look like a singleton?
ATenDispatch & globalATenDispatch() {
  static ATenDispatch singleton;
  return singleton;
}
Looks just like the singleton pattern, doesn't it? So what is ATenDispatch? See that registerOp method? Very familiar: this is clearly a class implementing an op registration mechanism.

// aten/src/ATen/core/ATenDispatch.h

class CAFFE2_API ATenDispatch {
 public:
  template<class FuncType>
  ATenDispatch& registerOp(TensorTypeId id, const char* schema, FuncType* fn) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (op_tables_.find(schema) == op_tables_.end()) {
      op_tables_.insert(std::make_pair(schema, ATenOpTable(schema)));
    }
    op_tables_.at(schema).registerOp(id, reinterpret_cast<void*>(fn));
    return *this;
  }

  ATenDispatch& registerFallbackBoxedOp(TensorTypeId id, FallbackBoxedFunction* fn) {
    std::lock_guard<std::mutex> lock(mutex_);
    boxed_fallback_table_[static_cast<int64_t>(id)] = fn;
    return *this;
  }

  const ATenOpTable* getOpTable(const char* schema) const {
    auto iter = op_tables_.find(schema);
    TORCH_CHECK(iter != op_tables_.end(),
        "No functions are registered for schema ", schema);
    return &iter->second;
  }

  FallbackBoxedFunction* getFallbackBoxedOp(TensorTypeId tid) const {
    return boxed_fallback_table_[static_cast<int64_t>(tid)];
  }

 private:
  std::unordered_map<std::string, ATenOpTable> op_tables_;
  FallbackBoxedFunction* boxed_fallback_table_[static_cast<int64_t>(TensorTypeId::NumTensorIds)] = {nullptr};
  std::mutex mutex_;
};
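Who calls registerOp? The per-backend files generated from native_functions.yaml do, via static initializers that run when the library is loaded. Roughly, it looks like the following hedged sketch (illustrative, not the verbatim generated code; the names mirror the real ones):

// Hypothetical sketch of a generated registration running at load time.
static auto& registry = globalATenDispatch().registerOp(
    TensorTypeId::CPUTensorId,
    "aten::rand(int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor",
    &TypeDefault::rand);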
And what about ATenOpTable, the value type paired with std::string in that map? The comment below explains it well enough: this class stores the implementation for each backend, plus one for variables.

// ATenOpTable stores the implementations for each backend, in addition to
// an implementation for variables.

// aten/src/ATen/core/ATenDispatch.h
class CAFFE2_API ATenOpTable {
 public:
  ATenOpTable(std::string schema)
    : schema_(std::move(schema)) {}

  // NB: No universal forwarding
  template<class Result, class... Args>
  Result callUnboxed(Args... args) const;

 private:

  void registerOp(TensorTypeId tid, void* fn) {
    TORCH_CHECK(function_table_[static_cast<int64_t>(tid)] == nullptr,
        "Attempting to register function for schema ", schema_,
        " and tensor type ", toString(tid),
        " but there is already a function registered");
    function_table_[static_cast<int64_t>(tid)] = fn;
  }

  C10_NORETURN void reportError(TensorTypeId tid) const;

  friend class ATenDispatch;

  std::string schema_;
  void* function_table_[static_cast<int64_t>(TensorTypeId::NumTensorIds)] = {nullptr};
};
OK, back to the last line of the rand function above: return table->callUnboxed<Tensor, IntArrayRef, const TensorOptions &>(size, options);. table is an instance of ATenOpTable, callUnboxed is its method, and with those template parameters it resolves to the specific registered function:

// build/aten/src/ATen/TypeDefault.cpp

Tensor rand(IntArrayRef size, const TensorOptions & options) {
  const DeviceGuard device_guard(options.device());
  return at::native::rand(size, options);
}
Stepping into at::native::rand:

// aten/src/ATen/native/TensorFactories.cpp

Tensor rand(IntArrayRef size, const TensorOptions& options) {
  return native::rand(size, nullptr, options);
}
And into native::rand:

// aten/src/ATen/native/TensorFactories.cpp

Tensor rand(IntArrayRef size, Generator* generator, const TensorOptions& options) {
  auto result = at::empty(size, options);
  return result.uniform_(0, 1, generator);
}
Then into at::empty:

// build/aten/src/ATen/Functions.h

static inline Tensor empty(IntArrayRef size, const TensorOptions & options, c10::optional<MemoryFormat> memory_format) {
#ifdef USE_STATIC_DISPATCH
    switch(tensorTypeIdToBackend(impl::dispatchTypeId(at::detail::multi_dispatch_tensor_type_set(options)))) {
        case Backend::CPU:
            return CPUType::empty(size, options, memory_format);
            break;
        case Backend::SparseCPU:
            return SparseCPUType::empty(size, options, memory_format);
            break;
        default:
            AT_ERROR("empty not implemented for ", at::toString(at::detail::multi_dispatch_tensor_type_set(options)));
    }
#else
    globalLegacyTypeDispatch().initForTensorTypeSet(at::detail::multi_dispatch_tensor_type_set(options));
    static auto table = globalATenDispatch().getOpTable("aten::empty.memory_format(int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor");
    return table->callUnboxed<Tensor, IntArrayRef, const TensorOptions &, c10::optional<MemoryFormat>>(size, options, memory_format);
#endif
}
We follow the same route as before to locate the op registered as "aten::empty.memory_format(int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor". Note that this op's implementation is also generated, once per backend.

// aten/src/ATen/native/native_functions.yaml

- func: empty.memory_format(int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor
  dispatch:
    CPU: empty_cpu
    CUDA: empty_cuda
    MkldnnCPU: empty_mkldnn
    SparseCPU: empty_sparse
    SparseCUDA: empty_sparse

So the unboxed function we finally pull out of the table is:

// build/aten/src/ATen/CPUType.cpp

Tensor empty(IntArrayRef size, const TensorOptions & options, c10::optional<MemoryFormat> memory_format) {
  const DeviceGuard device_guard(options.device());
  return at::native::empty_cpu(size, options, memory_format);
}
Let's step into at::native::empty_cpu.

// aten/src/ATen/native/TensorFactories.cpp

Tensor empty_cpu(IntArrayRef size, const TensorOptions& options, c10::optional<MemoryFormat> optional_memory_format) {
  AT_ASSERT(options.device().type() == DeviceType::CPU);
  AT_ASSERT(!options.is_variable());  // is_variable should have been 'unpacked'  // TODO: remove this when Variable and Tensor are merged
  check_size_nonnegative(size);

  c10::Allocator* allocator;
  if (options.pinned_memory()) {
    allocator = detail::getCUDAHooks().getPinnedMemoryAllocator();
  } else {
    allocator = at::getCPUAllocator(); // this is the branch we take
  }

  int64_t nelements = prod_intlist(size);
  auto dtype = options.dtype();
  auto storage_impl = c10::make_intrusive<StorageImpl>(
      dtype,
      nelements,
      allocator->allocate(nelements * dtype.itemsize()),
      allocator,
      /*resizeable=*/true);

  auto tensor = detail::make_tensor<TensorImpl>(std::move(storage_impl), at::TensorTypeId::CPUTensorId);
  // Default TensorImpl has size [0]
  if (size.size() != 1 || size[0] != 0) {
    tensor.unsafeGetTensorImpl()->set_sizes_contiguous(size);
  }

  auto memory_format = optional_memory_format.value_or(MemoryFormat::Contiguous);
  tensor.unsafeGetTensorImpl()->empty_tensor_restride(memory_format);
  return tensor;
}
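The layering here is worth pausing on: a Tensor is a cheap handle around a TensorImpl (sizes, strides, type id), which owns a Storage backed by a StorageImpl holding the DataPtr and the allocator. We can poke at this from the public API; a hedged sketch (method names as in recent libtorch, details may vary between versions):

#include <ATen/ATen.h>
#include <iostream>

int main() {
  at::Tensor t = at::empty({3, 4});
  std::cout << t.sizes() << std::endl;                    // [3, 4]
  std::cout << t.numel() * t.element_size() << std::endl; // 12 * 4 = 48 bytes backing the storage
  std::cout << (t.data_ptr() != nullptr) << std::endl;    // 1: memory handed out by the allocator
}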
Because we called the CPU version of empty, memory now has to be allocated on the CPU. The first step is to obtain the right "how to allocate" object and assign it to the c10::Allocator*.

It starts here:

// aten/src/ATen/Context.cpp

Allocator* getCPUAllocator() {
  return getTHDefaultAllocator();
}
Going a level deeper:

// aten/src/TH/THAllocator.cpp

at::Allocator* getTHDefaultAllocator() {
  return c10::GetCPUAllocator();
}
And deeper again:

// c10/core/CPUAllocator.cpp

at::Allocator* GetCPUAllocator() {
  return GetAllocator(DeviceType::CPU);
}
One level further down, we find that this allocator is one element of allocator_array; GetAllocator simply fetches it by index:

// c10/core/Allocator.cpp

at::Allocator* GetAllocator(const at::DeviceType& t) {
  auto* alloc = allocator_array[static_cast<int>(t)];
  AT_ASSERTM(alloc, "Allocator for ", t, " is not set.");
  return alloc;
}
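As a quick hedged usage sketch of this API (headers and exact namespaces may differ slightly between versions):

#include <c10/core/Allocator.h>

void demo() {
  at::Allocator* alloc = c10::GetAllocator(at::DeviceType::CPU);
  // DataPtr behaves like a smart pointer: when it goes out of scope, the
  // deleter installed by the allocator (free_cpu here) releases the memory.
  at::DataPtr p = alloc->allocate(64);
}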
So where does allocator_array come from? It is a global array that stores the various allocators, paired with SetAllocator and GetAllocator for installing and retrieving them:

// c10/core/Allocator.cpp

C10_API at::Allocator* allocator_array[at::COMPILE_TIME_MAX_DEVICE_TYPES];

// SetAllocator fills in the slot for a device type; the enum value itself is
// used as the array index (a flat table rather than a hash map)
void SetAllocator(at::DeviceType t, at::Allocator* alloc) {
  allocator_array[static_cast<int>(t)] = alloc;
}
The individual allocators are then registered via REGISTER_ALLOCATOR.

// c10/core/Allocator.h

template <at::DeviceType t>
struct AllocatorRegisterer {
  explicit AllocatorRegisterer(Allocator* alloc) {
    SetAllocator(t, alloc);
  }
};

#define REGISTER_ALLOCATOR(t, f)                  \
  namespace {                                     \
  static AllocatorRegisterer<t> g_allocator_d(f); \
  }
For example, DefaultCPUAllocator is registered for DeviceType::CPU, where DeviceType::CPU is just an enum member that corresponds to an integer.

// c10/core/CPUAllocator.cpp

static DefaultCPUAllocator g_cpu_alloc;

REGISTER_ALLOCATOR(DeviceType::CPU, &g_cpu_alloc);
DefaultCPUAllocator is the allocator class that actually does the work when we allocate on the CPU; it inherits from at::Allocator:

// c10/core/CPUAllocator.cpp

struct C10_API DefaultCPUAllocator final : at::Allocator {
  DefaultCPUAllocator() {}
  ~DefaultCPUAllocator() override {}
  at::DataPtr allocate(size_t nbytes) const override {
    void* data = alloc_cpu(nbytes);
    if (FLAGS_caffe2_report_cpu_memory_usage && nbytes > 0) {
      getMemoryAllocationReporter().New(data, nbytes);
      return {data, data, &ReportAndDelete, at::Device(at::DeviceType::CPU)};
    }
    return {data, data, &free_cpu, at::Device(at::DeviceType::CPU)};
  }

  static void ReportAndDelete(void* ptr) {
    if (!ptr) {
      return;
    }
    getMemoryAllocationReporter().Delete(ptr);
    free_cpu(ptr);
  }

  at::DeleterFnPtr raw_deleter() const override {
    if (FLAGS_caffe2_report_cpu_memory_usage) {
      return &ReportAndDelete;
    }
    return &free_cpu;
  }

 protected:
  static MemoryAllocationReporter& getMemoryAllocationReporter() {
    static MemoryAllocationReporter reporter_;
    return reporter_;
  }
};
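This also shows the extension point: anything implementing at::Allocator's interface can be swapped in. A toy hedged sketch (illustrative only; a real replacement should match alloc_cpu's alignment guarantees):

#include <c10/core/Allocator.h>
#include <cstdlib>

// Toy allocator forwarding to malloc/free; for illustration only.
struct MyCPUAllocator final : at::Allocator {
  at::DataPtr allocate(size_t nbytes) const override {
    void* data = nbytes ? std::malloc(nbytes) : nullptr;
    return {data, data, &Free, at::Device(at::DeviceType::CPU)};
  }
  static void Free(void* ptr) { std::free(ptr); }
  at::DeleterFnPtr raw_deleter() const override { return &Free; }
};

static MyCPUAllocator g_my_cpu_alloc;
// Installing it would route all subsequent CPU allocations through it:
// REGISTER_ALLOCATOR(at::DeviceType::CPU, &g_my_cpu_alloc);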
The actual work is done by alloc_cpu and free_cpu, which are called whenever memory is allocated or released:

// c10/core/CPUAllocator.cpp

void* alloc_cpu(size_t nbytes) {
  if (nbytes == 0) {
    return nullptr;
  }
  // We might have clowny upstream code that tries to alloc a negative number
  // of bytes. Let's catch it early.
  CAFFE_ENFORCE(
      ((ptrdiff_t)nbytes) >= 0,
      "alloc_cpu() seems to have been called with negative number: ", nbytes);

  void* data;
#ifdef __ANDROID__
  data = memalign(gAlignment, nbytes);
#elif defined(_MSC_VER)
  data = _aligned_malloc(nbytes, gAlignment);
#else
  int err = posix_memalign(&data, gAlignment, nbytes);
  if (err != 0) {
    CAFFE_THROW(
        "DefaultCPUAllocator: can't allocate memory: you tried to allocate ",
        nbytes,
        " bytes. Error code ",
        err,
        " (",
        strerror(err),
        ")");
  }
#endif

  CAFFE_ENFORCE(
      data,
      "DefaultCPUAllocator: not enough memory: you tried to allocate ",
      nbytes,
      " bytes. Buy new RAM!");

  // move data to a thread's NUMA node
  NUMAMove(data, nbytes, GetCurrentNUMANode());
  CHECK(
      !FLAGS_caffe2_cpu_allocator_do_zero_fill ||
      !FLAGS_caffe2_cpu_allocator_do_junk_fill)
      << "Cannot request both zero-fill and junk-fill at the same time";
  if (FLAGS_caffe2_cpu_allocator_do_zero_fill) {
    memset(data, 0, nbytes);
  } else if (FLAGS_caffe2_cpu_allocator_do_junk_fill) {
    memset_junk(data, nbytes);
  }

  return data;
}

void free_cpu(void* data) {
#ifdef _MSC_VER
  _aligned_free(data);
#else
  free(data);
#endif
}
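The detail to take away is the alignment: every CPU buffer comes back aligned to gAlignment (64 bytes in c10), which keeps allocations friendly to cache lines and SIMD loads. The same pattern in standalone form, assuming a POSIX system:

#include <cstdlib>
#include <cstdint>
#include <cassert>

int main() {
  void* data = nullptr;
  // 64-byte alignment, like c10's gAlignment.
  int err = posix_memalign(&data, 64, 1024);
  assert(err == 0);
  assert(reinterpret_cast<uintptr_t>(data) % 64 == 0);
  std::free(data);
}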

To be continued; stay tuned for Part 2!

Follow the Oldpan blog for the latest updates and more carefully brewed deep learning articles.
