Today is a peaceful day, not hectic like the days I've had recently.
Taking steroids for my skin disease boosts my energy (to some extent), and I can finish things on only a bit of sleep. I know it's not good for my health in the long run, but I wish I could maintain this hormone balance.
With the given operations, there will be groups of students that share the same set of books. Thought of as a graph, it consists of cycles; for example, if student 1 passes to student 2, 2 to 3, and 3 back to 1, those three form one cycle of size 3. The answer for each student is the size of their cycle.
We can use the Disjoint Set Union algorithm to group students and count the size of each group.
def find_parent(i: int, parent: list[int]) -> int:
    if parent[parent[i]] == parent[i]:
        return parent[i]
    return find_parent(parent[i], parent)
def union_set(i: int, j: int, parent: list[int], size: list[int]) -> None:
    parent_i: int = find_parent(i, parent)
    parent_j: int = find_parent(j, parent)
    if parent_i == parent_j:
        return
    if size[parent_i] > size[parent_j]:
        size[parent_i] += size[parent_j]
        parent[parent_j] = parent_i
    else:
        size[parent_j] += size[parent_i]
        parent[parent_i] = parent_j
for _ in range(int(input())):
    n: int = int(input())
    p: list[int] = list(map(int, input().split()))
    parent: list[int] = list(range(n))
    size: list[int] = [1] * n
    for i, pi in enumerate(p):
        union_set(i, pi - 1, parent, size)  # p is 1-indexed in the input
    for i in range(n):
        parent[i] = find_parent(i, parent)
    result: list[int] = []
    for i in range(n):
        result.append(size[parent[i]])
    print(" ".join(str(x) for x in result))
When I think about the operations as a graph, it consists only of one-line paths. I'm asked to count the number of groups, which are paths in this case, so Disjoint Set Union sounds like a good option.
After constructing the graph, I simply iterate over all the old names and, for each leader/parent (the newest name), keep the old name whose chain to it is longest.
There might be a more straightforward way to solve this problem, but since my focus today is getting familiar with Disjoint Set Union, I'll skip looking for alternatives for now.
My solution ended up being a bit different from the typical implementation of a Disjoint Set Union, but it was good to explore how the algorithm can be applied to this different case.
"""
def find_parent_helper(s: str, parent: dict, count: int) -> tuple[str, int]:
    if parent[parent[s]] == parent[s]:
        return parent[s], count
    root, count = find_parent_helper(parent[s], parent, count)
    return root, count + 1

def find_parent(s: str, parent: dict) -> tuple[str, int]:
    return find_parent_helper(s, parent, 0)
The recursive approach got a Runtime Error, which may be due to the recursion limit.
But the recursion depth should stay within Python's default limit of 1000, so it looks like it should be fine, though.
"""
def find_parent(s: str, parent: dict) -> tuple[str, int]:
    count: int = 0
    curr: str = s
    while parent[parent[curr]] != parent[curr]:
        curr = parent[curr]
        count += 1
    return parent[curr], count
q: int = int(input())
parent: dict = {}
for _ in range(q):
    old, new = input().split()
    parent[old] = new
    parent[new] = new

result: dict[str, tuple[str, int]] = {}  # new: (old, count)
for old in list(parent.keys()):
    new, count = find_parent(old, parent)
    if new not in result or count > result[new][1]:
        result[new] = (old, count)

print(len(result))
for new, old_info in result.items():
    print(f"{old_info[0]} {new}")
This problem looks like a typical DFS problem to visit all nodes.
This solution should be correct, but it seems Python is not good at deep recursion, resulting in a Runtime Error on a later set of test cases.
I have liked recursion ever since I touched OCaml at university, because I only have to worry about one step at a time (divide and conquer). Being deprived of the option makes me sad. It is said that I should use BFS instead of DFS; a BFS sketch follows the solution below.
def dfs(
    row_i: int,
    col_i: int,
    row_len: int,
    col_len: int,
    a: list[list[int]],
    visited: list[list[bool]],
) -> int:
    visited[row_i][col_i] = True
    adj_volume: int = 0
    if row_i > 0 and a[row_i - 1][col_i] != 0 and not visited[row_i - 1][col_i]:
        adj_volume += dfs(row_i - 1, col_i, row_len, col_len, a, visited)
    if (
        row_i < row_len - 1
        and a[row_i + 1][col_i] != 0
        and not visited[row_i + 1][col_i]
    ):
        adj_volume += dfs(row_i + 1, col_i, row_len, col_len, a, visited)
    if col_i > 0 and a[row_i][col_i - 1] != 0 and not visited[row_i][col_i - 1]:
        adj_volume += dfs(row_i, col_i - 1, row_len, col_len, a, visited)
    if (
        col_i < col_len - 1
        and a[row_i][col_i + 1] != 0
        and not visited[row_i][col_i + 1]
    ):
        adj_volume += dfs(row_i, col_i + 1, row_len, col_len, a, visited)

    return a[row_i][col_i] + adj_volume
for _ in range(int(input())):
    n, m = map(int, input().split())

    a: list[list[int]] = []
    for _ in range(n):
        a.append(list(map(int, input().split())))

    visited: list[list[bool]] = []
    for _ in range(n):
        visited.append([False] * m)

    largest_volume: int = 0
    for row_i in range(n):
        for col_i in range(m):
            if visited[row_i][col_i] or a[row_i][col_i] == 0:
                continue
            largest_volume = max(largest_volume, dfs(row_i, col_i, n, m, a, visited))

    print(largest_volume)
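Here is the BFS rewrite I had in mind, as a sketch: bfs_volume is a hypothetical name, meant as a drop-in replacement for dfs above, with an explicit queue instead of the call stack.

from collections import deque

def bfs_volume(
    start_row: int,
    start_col: int,
    row_len: int,
    col_len: int,
    a: list[list[int]],
    visited: list[list[bool]],
) -> int:
    # Iterative BFS over the four-connected cells of one lake; the
    # explicit queue sidesteps Python's recursion limit entirely.
    queue: deque[tuple[int, int]] = deque([(start_row, start_col)])
    visited[start_row][start_col] = True
    volume: int = 0
    while queue:
        row_i, col_i = queue.popleft()
        volume += a[row_i][col_i]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = row_i + dr, col_i + dc
            if 0 <= nr < row_len and 0 <= nc < col_len and a[nr][nc] != 0 and not visited[nr][nc]:
                visited[nr][nc] = True
                queue.append((nr, nc))
    return volume

The visited bookkeeping is identical; only the traversal order changes, and the sum per lake stays the same.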
This approach iterates over all the coordinates and checks the two adjacent coordinates (the next column and the next row). It merges the sets if they belong to the same lake and updates their lake sizes.
We only need the largest lake, so the final answer is the maximum size among the lake sets.
import copy

def find_parent(
    coord: tuple[int, int], parent: list[list[tuple[int, int]]]
) -> tuple[int, int]:
    dparent: tuple[int, int] = parent[coord[0]][coord[1]]
    dparent_row, dparent_col = dparent[0], dparent[1]
    if parent[dparent_row][dparent_col] == dparent:
        return dparent
    return find_parent((dparent_row, dparent_col), parent)

def union_set(
    coord1: tuple[int, int],
    coord2: tuple[int, int],
    parent: list[list[tuple[int, int]]],
    size: list[list[int]],
) -> None:
    parent1 = find_parent(coord1, parent)
    parent2 = find_parent(coord2, parent)

    if parent1 == parent2:
        return

    row1, col1 = parent1
    row2, col2 = parent2
    if size[row1][col1] > size[row2][col2]:
        size[row1][col1] += size[row2][col2]
        parent[row2][col2] = parent1
    else:
        size[row2][col2] += size[row1][col1]
        parent[row1][col1] = parent2
for _ in range(int(input())):
    row_len, col_len = map(int, input().split())
    a: list[list[int]] = []
    for _ in range(row_len):
        a.append(list(map(int, input().split())))

    size: list[list[int]] = copy.deepcopy(a)
    parent: list[list[tuple[int, int]]] = []
    for row_i in range(row_len):
        _row = []
        for col_i in range(col_len):
            _row.append((row_i, col_i))
        parent.append(_row)

    for row_i in range(row_len):
        for col_i in range(col_len):
            if a[row_i][col_i] == 0:
                continue
            if row_i < row_len - 1 and a[row_i + 1][col_i] > 0:
                union_set((row_i, col_i), (row_i + 1, col_i), parent, size)
            if col_i < col_len - 1 and a[row_i][col_i + 1] > 0:
                union_set((row_i, col_i), (row_i, col_i + 1), parent, size)

    largest_volume: int = 0
    for row_i in range(row_len):
        for col_i in range(col_len):
            largest_volume = max(largest_volume, size[row_i][col_i])
    print(largest_volume)
I think this is the last Codeforces problem that is tagged “dsu” and has rating <= 1100. Great job!
The most naive way (I can think of) to achieve real-time timeline updates is to poll the Web API periodically, but that costs a lot of connections, and it isn't actually real-time.
streaming API methods - Mastodon documentation[1]
Subscribe to server-sent events for real-time updates via a long-lived HTTP connection or via WebSocket.
It seems there are two ways to achieve the feature: long-lived HTTP connections and WebSocket.
Connection management in HTTP/1.x - HTTP | MDN[2]
Short-lived connections have two major hitches: the time taken to establish a new connection is significant, and performance of the underlying TCP connection gets better only when this connection has been in use for some time (warm connection).
To ease these problems, the concept of a persistent connection has been designed, even prior to HTTP/1.1. Alternatively this may be called a keep-alive connection.
A persistent connection is one which remains open for a period of time, and can be reused for several requests, saving the need for a new TCP handshake, and utilizing TCP's performance enhancing capabilities. This connection will not stay open forever: idle connections are closed after some time (a server may use the Keep-Alive header to specify a minimum time the connection should be kept open).
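In Python, this is roughly what requests.Session gives you for free; a minimal sketch (mastodon.example is a placeholder host, and /api/v1/instance is just a public endpoint to hit):

import requests

# A Session reuses one TCP connection across requests (HTTP keep-alive),
# so only the first call pays the connection-setup cost.
session = requests.Session()
for _ in range(3):
    resp = session.get("https://mastodon.example/api/v1/instance")  # placeholder server
    print(resp.status_code)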
About HTTP server-sent events (Mastodon documentation) => [1]
Your application can use a server-sent events endpoint to receive updates in real-time. Server-sent events is an incredibly simple transport method that relies entirely on chunked-encoding transfer; in other words, the HTTP connection is kept open and receives new data periodically.
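As a quick experiment, something like this should consume those chunked events; the endpoint path and the ACCESS_TOKEN placeholder are assumptions based on my reading of the docs, not verified against a real server yet:

import json
import requests

# A sketch: subscribe to the user stream over a long-lived HTTP connection.
ACCESS_TOKEN = "..."  # an OAuth token obtained beforehand
URL = "https://mastodon.example/api/v1/streaming/user"

with requests.get(
    URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    stream=True,  # keep the connection open instead of reading one body
) as resp:
    event = None
    for line in resp.iter_lines(decode_unicode=True):
        if line.startswith("event:"):
            event = line.removeprefix("event:").strip()
        elif line.startswith("data:") and event == "update":
            status = json.loads(line.removeprefix("data:").strip())
            print(status["content"])  # a new status, pushed in real time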
The WebSocket API (WebSockets) - Web APIs | MDN
The WebSocket API makes it possible to open a two-way interactive communication session between the user’s browser and a server. With this API, you can send messages to a server and receive responses without having to poll the server for a reply.
Establishing a WebSocket connection => [1]
Returns: Stream of Event.
Open a multiplexed WebSocket connection to receive events.
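The WebSocket variant would look something like this; again a sketch, assuming the documented /api/v1/streaming endpoint and its access_token / stream query parameters (assumptions I still need to verify):

import asyncio
import json
import websockets  # third-party: pip install websockets

async def watch_home_timeline() -> None:
    # "..." is a placeholder for the real access token.
    url = "wss://mastodon.example/api/v1/streaming?access_token=...&stream=user"
    async with websockets.connect(url) as ws:
        async for raw in ws:
            event = json.loads(raw)
            if event["event"] == "update":
                status = json.loads(event["payload"])  # payload is a JSON string
                print(status["content"])

asyncio.run(watch_home_timeline())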
It's embarrassing that I was not aware of these functionalities, but it should be fine as long as I'm making progress. Focusing on the process is more important than the results.
I want to do more research, but my brain is shutting down. I'll continue later.
Spaghetti 600, Protein shake 300, Rice 750, Mashed potatoes 150
Total 1800 kcal
4 mi run, 3 mi walk, leg raise, push-ups
MUST:
TODO: