$ curl ascii.live/rick
tfffffttffffffLLffffttttttttttt1tffffffffffttttttt11111tfffffffLfttfff
fttfffttfffffffffttfftttttt11ttt1ttfffffftttttt11tt111111ttffffLLLfttf
LfttttffLLLfftttttfLftttttttfffft1tffffftttffftt11t111tt111ttfffLLfftt
LLffttfLLLfttfffftfLfttttttffffffttttftttffffffftt1111tftt1111tffLLftt
LLLLftffttttfLLffftfftttttftttttttffttttffffffffftt111tftttttttttfLLft
LLLfttttffffffffftffttttttiii,::::i1ttttffffffffftt1111t11tttttttttttt
LLLftffftfLfttfffLLLfttt11:,,,,,,,:::itttttfffffft11111tft1ttt1tffttt1
LLfftfLLftttfLfffLLLLfttt1i::::,,::,,,itt11tfffft111111ffft1tt11tt1tt1
fttttfffffttLLLLfLLLftttttiiiiii1i,:,,:ttt111ttt11tt111tffttfftt1ttfft
ttttttttttttfLLLffLLftttti,iiii111i,:,iftftt11ttt11111tffftffffttttttt
ffffffffttfttfLLfffLfttt1,,,ii,iiii,::1ffttt1tffft11111ffttfffttt11ttt
fffffffftfffttffftfftttt1,,i1i,,iii,:itft11tttttfttt111tt1tffttffttfff
fffffffttttttttttffftttt1,,iiii111iii1t111tfft1tttt111111t1ttttfftttff
fffffftt1ttt111ttffftttti,,iiiiiiiii1111ttfffft11111111tffttttttt11tff
fttttt1ttfft1111ttfftttt1,,,iiii,,itt11tffffffft1111111tttt111tftt11tt
tttt11tfffftttt11tfftttt1i,,,,ii,,it1111tftfffft11t1111ttt111t1tfftt11
fffftttffftttfft11tft111i,,,,,ii,,i111111tttft111tt1111ttt11tt1tffffft
ftfft11tfffttfttt11i,,:ii,,,,iiiii1,11tt111t11111tt1111tt111tft1tfffft
ffft1111tfttftii,:,....,1i,,,,ii111,,:,i1tt11tt11111111111t11tt1tffft1
tt11ttt1111i,:,.......,i1tiiii,,11i,,,..,:,i1t1t111t1111111111t1tffttt
111ttttt1i:...........,ii111iii11i:.,,......,:,1t111111111tttt111ttfLf
11tttt11t,..............,::,,i1ti:,............,1t1111111tttffttttffLf
11tttttt1,...............:,:,,,,,:..............,tt111111tttffftttffff
11tttttt1................,,,,,,,,:..............:t11111111ttffttttffLL
11tttttti............... .:,,,,,:,..............:1111111111tttttttffLL
11tttttt,............. .:,,,,,,:::..............:111tt11tt111ttt1tfffL
11ttttt1,..............:,i,,i,::::..............:1ttt1111tt11tff1tffff
11ttttt,..............:,,,,,i:::::..............,1111111111111tt11tttt
1tttttt:..............:,,,::,:::::..............,t1111111111111t11tttt
1ttttt1,........... .,,:,...,:::::..............,11i,,i111tttt1111tttt
11tttt,............,..... ...,::::...............:i,:,,i1ttttt11111ttt
111111,............,.........,::::,..............,:::,ii1tt11111111111
111111,............ .........,,,::,..............::::,i11tt11111111111
1111111:....... .............,,:,,..............,::::i111t11111111111
11111111:....,,,,.............,:::,,..................,1t1111111111111
111111111iiii11t, .............,:,..,........ ..:1tttttt11111111
import math

for _ in range(int(input())):
    a, b = map(int, input().split())
    print(math.lcm(a, b))
As I looked at the examples, I noticed that
lcm(a, b) % a == lcm(a, b) % b == 0
but I could not see why this was guaranteed to be the answer.
Rayan 2024 Selection Round Editorial
If
m % a == m % b == x
(with x >= 1), then
(m - 1) % a == (m - 1) % b == x - 1
===
If m % a = 1, the remainder when m is divided by a is 1. Mathematically:
m = k * a + 1
for some integer k. Subtracting 1 from both sides:
m - 1 = k * a
so (m - 1) % a = (k * a) % a = 0, because k * a is a multiple of a.
Thus, if m % a = 1, then (m - 1) % a = 0.
The same argument works for any x >= 1: if m = k * a + x, then m - 1 = k * a + (x - 1), and since 0 <= x - 1 < a, we get (m - 1) % a = x - 1.
===
Therefore, m % a == m % b == 0 is optimal, and because m must then be divisible by both a and b, lcm(a, b) is the answer.
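A quick brute-force check of that descent step (not part of the solution, just to convince myself):

import random

# For random m, a, b: whenever m % a == m % b == x with x >= 1,
# (m - 1) % a and (m - 1) % b should both equal x - 1.
for _ in range(100_000):
    a = random.randint(1, 50)
    b = random.randint(1, 50)
    m = random.randint(1, 5_000)
    x = m % a
    if x >= 1 and m % b == x:
        assert (m - 1) % a == x - 1
        assert (m - 1) % b == x - 1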
Two pointers!
for _ in range(int(input())):
    n, m, k = map(int, input().split())
    s: list[str] = list(input())
    li: int = 0
    ri: int = 0
    cnt: int = 0  # length of the current run of "0"s
    res: int = 0
    while ri < n:
        if s[ri] == "1":
            # a "1" breaks the run; move the window past it
            li = ri
            ri = ri + 1
            cnt = 0
        else:
            cnt += 1
            if cnt == m:
                # a full run of m zeros: count it and skip the right pointer ahead by k
                res += 1
                li = ri
                ri = ri + k
                cnt = 0
            else:
                ri += 1
    print(res)
AWS developed several chips optimized for training/inference in Amazon EC2.
AI Accelerator - AWS Trainium - AWS
AWS Trainium is the machine learning (ML) chip that AWS purpose built for deep learning (DL) training of 100B+ parameter models.
Trainium-based Amazon EC2 Trn1 instances deliver faster time to train while offering up to 50% cost-to-train savings over comparable EC2 instances.
An Amazon EC2 Trn1 instance must be used to leverage the chip.
AI Chip - AWS Inferentia - AWS
AWS Inferentia accelerators are designed by AWS to deliver high performance at the lowest cost in Amazon EC2 for your deep learning (DL) and generative AI inference applications.
An EC2 Inf1 instance must be used.
AWS Trainium is for training, and AWS Inferentia is for inference tasks.
Inference Sdk - AWS Neuron - AWS
SDK to optimize machine learning on AWS Trainium and AWS Inferentia accelerators.
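A minimal sketch of what compiling a model with Neuron's PyTorch integration might look like, assuming a Trn1/Inf2 instance with torch_neuronx installed (the API is from the Neuron docs but may vary by version):

import torch
import torch_neuronx  # AWS Neuron SDK's PyTorch integration; assumed installed on the instance

# Toy model just to show the compile step; real workloads would be full DL models.
model = torch.nn.Linear(4, 2).eval()
example_input = torch.rand(1, 4)

# torch_neuronx.trace compiles the model for the Neuron accelerator;
# the result can be called like a regular torch module.
neuron_model = torch_neuronx.trace(model, example_input)
print(neuron_model(example_input))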
Amazon SageMaker: according to Getting Started with Machine Learning on Amazon SageMaker - AWS, it is a fully managed service that provides a low-code/no-code platform for the full machine-learning lifecycle.
Amazon Titan is a family of AI models developed by AWS and available only through Amazon Bedrock.
Foundation Model for Generative AI - Amazon Titan - AWS
Amazon Titan covers most generative AI use cases except vision capabilities. It offers models for text/image generation, as well as embedding models that convert text into vector representations so that the vectors (embeddings) can be used to retrieve relevant information via semantic/similarity search.
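As a sketch of how a Titan embedding call might look through Bedrock with boto3 (the model ID and the inputText/embedding fields are assumptions based on the Bedrock docs; check the current documentation):

import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed model ID and request shape for the Titan text embedding model.
response = client.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    body=json.dumps({"inputText": "Llamas are members of the camelid family"}),
)
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # dimensionality of the returned vector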
Ollama is an application that can run a variety of large language models.
To run Llama 3.2:
ollama run llama3.2
It downloads the model, serves the LLM on localhost, and starts an interactive prompt. Even after exiting the interactive prompt, the LLM stays served until the Ollama application is closed.
By default, it uses port 11434:
$ curl http://localhost:11434/v1/chat/completions \
-d '{
"model": "llama3.2",
"messages": [
{
"role": "user",
"content": "Hello!"
}
]
}'
{
"id": "chatcmpl-363",
"object": "chat.completion",
"created": 1733205517,
"model": "llama3.2",
"system_fingerprint": "fp_ollama",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 27,
"completion_tokens": 26,
"total_tokens": 53
}
}
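Since the /v1 endpoints are OpenAI-compatible, the openai Python client can also be pointed at the local server (the api_key is required by the client but ignored by Ollama):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)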
It’s also possible to host an embedding model.
$ ollama pull mxbai-embed-large
$ curl http://localhost:11434/api/embeddings \
-d '{
"model": "mxbai-embed-large",
"prompt": "Llamas are members of the camelid family"
}'
{
"embedding": [
0.5835007429122925,
1.1735061407089233,
0.6405693888664246,
...
]
}
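A small sketch of using these embeddings for semantic/similarity search, with requests and plain-Python cosine similarity (the endpoint and fields match the call above):

import requests

def embed(text: str) -> list[float]:
    # Calls the local Ollama embeddings endpoint shown above.
    r = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "mxbai-embed-large", "prompt": text},
    )
    return r.json()["embedding"]

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

query = embed("What animal family do llamas belong to?")
doc = embed("Llamas are members of the camelid family")
print(cosine(query, doc))  # closer to 1.0 means more similar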
A Docker image is also available:
services:
  ollama:
    container_name: ollama
    image: ollama/ollama:0.4.7
    ports:
      - 11434:11434
    volumes:
      - ollama_data:/root/.ollama

volumes:
  ollama_data:
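To start the container and pull a model into it (standard docker compose / docker exec commands):
$ docker compose up -d
$ docker exec -it ollama ollama pull llama3.2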
164.0 lb
Tofu 35g
Lunchly 10g
Protein bar 15g
Vegetable sticks 0g
Bagels 0g
Protein shake 50g
Egg 70g
total protein -> 180 g (>= 140)
MUST: