LabelTensor
- class LabelTensor(x, labels, *args, **kwargs)
Bases: Tensor
Torch tensor with a label for each column.
Construct a LabelTensor by passing a tensor and a list of column labels. The labels uniquely identify the columns of the tensor, allowing for easier manipulation.
- Parameters:
x (torch.Tensor) – the tensor to be labelled.
labels (list(str)) – the labels of the tensor columns.
- Example:
>>> import torch
>>> from pina import LabelTensor
>>> tensor = LabelTensor(torch.rand((2000, 3)), ['a', 'b', 'c'])
>>> tensor
tensor([[6.7116e-02, 4.8892e-01, 8.9452e-01],
        [9.2392e-01, 8.2065e-01, 4.1986e-04],
        [8.9266e-01, 5.5446e-01, 6.3500e-01],
        ...,
        [5.8194e-01, 9.4268e-01, 4.1841e-01],
        [1.0246e-01, 9.5179e-01, 3.7043e-02],
        [9.6150e-01, 8.0656e-01, 8.3824e-01]])
>>> tensor.extract('a')
tensor([[0.0671],
        [0.9239],
        [0.8927],
        ...,
        [0.5819],
        [0.1025],
        [0.9615]])
>>> tensor['a']
tensor([[0.0671],
        [0.9239],
        [0.8927],
        ...,
        [0.5819],
        [0.1025],
        [0.9615]])
>>> tensor.extract(['a', 'b'])
tensor([[0.0671, 0.4889],
        [0.9239, 0.8207],
        [0.8927, 0.5545],
        ...,
        [0.5819, 0.9427],
        [0.1025, 0.9518],
        [0.9615, 0.8066]])
>>> tensor.extract(['b', 'a'])
tensor([[0.4889, 0.0671],
        [0.8207, 0.9239],
        [0.5545, 0.8927],
        ...,
        [0.9427, 0.5819],
        [0.9518, 0.1025],
        [0.8066, 0.9615]])
- static vstack(label_tensors)
Stack tensors vertically. For more details, see torch.vstack().
- Parameters:
label_tensors (list(LabelTensor)) – the tensors to stack. They need to have equal labels.
- Returns:
the stacked tensor
- Return type:
LabelTensor
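A minimal usage sketch (illustrative, not part of the original reference), assuming the stacked tensors share the same labels:
>>> import torch
>>> from pina import LabelTensor
>>> t1 = LabelTensor(torch.rand((10, 2)), ['x', 'y'])
>>> t2 = LabelTensor(torch.rand((5, 2)), ['x', 'y'])
>>> stacked = LabelTensor.vstack([t1, t2])   # rows of t2 appended below t1
>>> stacked.shape
torch.Size([15, 2])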
- clone(*args, **kwargs)
Clone the LabelTensor. For more details, see torch.Tensor.clone().
- Returns:
A copy of the tensor.
- Return type:
LabelTensor
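A short illustrative sketch, assuming the labels are carried over to the copy:
>>> import torch
>>> from pina import LabelTensor
>>> t = LabelTensor(torch.rand((100, 2)), ['x', 'y'])
>>> c = t.clone()   # copy with its own storage, labels preserved
>>> c is t
False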
- to(*args, **kwargs)
Performs Tensor dtype and/or device conversion. For more details, see torch.Tensor.to().
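A hedged sketch of a dtype conversion; the labels are expected to travel with the converted tensor:
>>> import torch
>>> from pina import LabelTensor
>>> t = LabelTensor(torch.rand((100, 2)), ['x', 'y'])
>>> t64 = t.to(dtype=torch.float64)   # same data, double precision
>>> t64.dtype
torch.float64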
- select(*args, **kwargs)
Performs Tensor selection. For more details, see torch.Tensor.select().
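An illustrative sketch using the standard torch.Tensor.select(dim, index) signature, here slicing out the first row:
>>> import torch
>>> from pina import LabelTensor
>>> t = LabelTensor(torch.rand((100, 2)), ['x', 'y'])
>>> first_row = t.select(0, 0)   # select index 0 along dimension 0
>>> first_row.shape
torch.Size([2])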
- cuda(*args, **kwargs)
Send Tensor to CUDA. For more details, see torch.Tensor.cuda().
- cpu(*args, **kwargs)
Send Tensor to CPU. For more details, see torch.Tensor.cpu().
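A small device-movement sketch (illustrative); the CUDA branch is guarded so the snippet also runs on machines without a GPU:
>>> import torch
>>> from pina import LabelTensor
>>> t = LabelTensor(torch.rand((100, 2)), ['x', 'y'])
>>> if torch.cuda.is_available():
...     t = t.cuda()   # move data to the GPU
>>> t = t.cpu()         # back on the CPU either way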
- extract(label_to_extract)
Extract a subset of the original tensor by returning all the columns corresponding to the passed label_to_extract.
- detach()
Returns a new Tensor, detached from the current graph.
The result will never require gradient.
This method also affects forward mode AD gradients and the result will never have forward mode AD gradients.
Note
Returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks.
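A brief illustrative sketch showing that the detached copy no longer tracks gradients:
>>> import torch
>>> from pina import LabelTensor
>>> t = LabelTensor(torch.rand((100, 2)), ['x', 'y'])
>>> _ = t.requires_grad_()   # start recording operations on t
>>> d = t.detach()
>>> d.requires_grad
False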
- requires_grad_(requires_grad=True) → Tensor
Change if autograd should record operations on this tensor: sets this tensor’s requires_grad attribute in-place. Returns this tensor.
requires_grad_()’s main use case is to tell autograd to begin recording operations on a Tensor tensor. If tensor has requires_grad=False (because it was obtained through a DataLoader, or required preprocessing or initialization), tensor.requires_grad_() makes it so that autograd will begin to record operations on tensor.
- Parameters:
requires_grad (bool) – If autograd should record operations on this tensor. Default: True.
Example:
>>> # Let's say we want to preprocess some saved weights and use
>>> # the result as new weights.
>>> saved_weights = [0.1, 0.2, 0.3, 0.25]
>>> loaded_weights = torch.tensor(saved_weights)
>>> weights = preprocess(loaded_weights)  # some function
>>> weights
tensor([-0.5503,  0.4926, -2.1158, -0.8303])
>>> # Now, start to record operations done to weights
>>> weights.requires_grad_()
>>> out = weights.pow(2).sum()
>>> out.backward()
>>> weights.grad
tensor([-1.1007,  0.9853, -4.2316, -1.6606])
- append(lt, mode='std')
Return a copy of the merged tensors.
- Parameters:
lt (LabelTensor) – The tensor to merge.
mode (str) – the merge mode, one of {‘std’, ‘first’, ‘cross’}. Default: ‘std’.
- Returns:
The merged tensors.
- Return type:
LabelTensor
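A hedged sketch of the default ‘std’ mode, assuming it concatenates the columns of two tensors that have the same number of rows and disjoint labels:
>>> import torch
>>> from pina import LabelTensor
>>> a = LabelTensor(torch.rand((100, 1)), ['a'])
>>> b = LabelTensor(torch.rand((100, 1)), ['b'])
>>> ab = a.append(b)   # default mode='std'; labels become ['a', 'b']
>>> ab.shape
torch.Size([100, 2])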