I feel a bit tired, mainly because I've been researching hard on how to optimize resource usage in machine-learning model inference. Let's relax over the weekend.
Survival, social order, and entertainment, in this order: as we no longer fight for survival or social order, at some point everything ends up as entertainment, i.e., doing things just for fun.

Let's consider a simpler situation in which I get to increase one of the three numbers by 1.
Assume a <= b <= c. I have three options:

(a + 1) * b * c = abc + bc
a * (b + 1) * c = abc + ac
a * b * (c + 1) = abc + ab
Because a <= b <= c, we have bc >= ac >= ab. Therefore, (a + 1) * b * c is the largest option, i.e., increasing the smallest of the three numbers maximizes their product.
Is it guaranteed that maximizing the product for each step leads to the maximum value in the end after applying the operation 5 times? I don't know…
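A quick brute-force check could settle it for small inputs. Here's a sketch in Python comparing the greedy strategy against all 3^5 choice sequences (the starting triples are made up):

    from itertools import product as choice_seqs

    def greedy(nums, steps=5):
        # Each step, add 1 to the current smallest number.
        nums = list(nums)
        for _ in range(steps):
            nums[nums.index(min(nums))] += 1
        return nums[0] * nums[1] * nums[2]

    def brute_force(nums, steps=5):
        # Try every sequence of choices: 3 ** steps of them.
        best = 0
        for seq in choice_seqs(range(3), repeat=steps):
            trial = list(nums)
            for i in seq:
                trial[i] += 1
            best = max(best, trial[0] * trial[1] * trial[2])
        return best

    # Made-up starting triples, just to compare the two strategies.
    for start in [(1, 2, 3), (0, 0, 10), (5, 5, 5)]:
        print(start, greedy(start), brute_force(start))

If the two agree on enough triples, that's decent (if not rigorous) evidence for the greedy strategy.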
Another thing I looked into: torch.no_grad() to disable gradient calculation and save memory. The PyTorch docs say:
Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True.
I tried it with our project and only saw a slight improvement in memory usage. Or it's entirely possible that I misunderstood something and applied it in the wrong way (I don't see many web pages on the function's usage).
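For reference, the pattern I mean looks like this; the model below is a made-up stand-in, not our actual one:

    import torch
    import torch.nn as nn

    # A stand-in model, just for illustration.
    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    model.eval()  # also switches off dropout / batch-norm updates

    x = torch.randn(32, 128)

    with torch.no_grad():  # no autograd graph is built inside this block
        y = model(x)

    print(y.requires_grad)  # False: nothing is kept around for backward()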
Also, I was browsing the web pages of TorchScript, but couldn't figure it out.
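In case I come back to it: from what I gathered, the basic entry point is torch.jit.trace, which runs the model once on an example input and records the executed operations. A minimal sketch (again with a made-up model):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    model.eval()

    # Tracing runs the model once and records the operations.
    example = torch.randn(1, 128)
    traced = torch.jit.trace(model, example)

    # The traced module can be saved and loaded without the Python class.
    traced.save("model_traced.pt")
    loaded = torch.jit.load("model_traced.pt")
    print(loaded(example).shape)

As far as I understand, tracing won't capture data-dependent control flow; that's what torch.jit.script is for.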
Cheese: 10g
Protein shake: 10g
Sashimi: 0g
Tofu: 0g
Pork: 0g
Bacon egg: 10g
Total carbohydrate: 30g