I decided not to complicate my tests

poxoja9630
Posts: 10
Joined: Sun Dec 22, 2024 5:36 am


Post by poxoja9630 »

Here is the list I came up with:

requests = [
    '/api/query/articles?blog=twilio',
    '/api/query/days?blog=twilio',
    '/api/query/products?blog=twilio',
    '/api/query/teams?blog=twilio',
    '/api/query/authors?blog=twilio',
    '/api/query/languages?blog=twilio',
    '/api/query/human_languages?blog=twilio',
    '/api/query/countries?blog=twilio',
    '/api/query/article_years?blog=twilio',
]

These URLs then need to be supplemented with additional query string arguments. The start and end arguments are required in all requests, as they specify the requested time period. I found that this had a big impact on response times, so I tried different durations. There are a number of other query string arguments that can be used to implement filters. The blog argument I included in the URLs above selects the Twilio blog, which has the most traffic.
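To make the required arguments concrete, here is a small sketch of how one of these URLs could be expanded with start and end before being sent. The server address and the date format are my own assumptions for illustration; the post does not specify them.

server = 'https://example.com'  # placeholder base URL (assumption)
url = '/api/query/articles?blog=twilio'
start, end = '2024-01-01', '2024-12-31'  # assumed date format

# The query URLs already contain a query string, so start and end are
# appended with '&', exactly as the test function below does.
full_url = f'{server}{url}&start={start}&end={end}'
print(full_url)
# https://example.com/api/query/articles?blog=twilio&start=2024-01-01&end=2024-12-31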

I decided not to complicate my tests with additional filters, because when looking at the usage logs, I noticed that most user queries did not have any filters. With this list, I created a short Python function that executes these queries and saves the duration of each one in a results dictionary:

import random
import subprocess
from timeit import timeit

requests = [
    # ...
]

results = {url: [] for url in requests}

def test(server, apikey, start, end):
    requests_copy = requests[:]
    random.shuffle(requests_copy)
    for url in requests_copy:
        t = timeit(lambda: subprocess.check_call(
            ['curl', '-f', f'{server}{url}&start={start}&end={end}',
             '-H', f'Authorization: Bearer {apikey}'],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        ), number=1)
        results[url].append(t)

The test() function makes a copy of the requests list, shuffles it randomly, and then executes each request with curl as a subprocess. It uses the timeit() function from the Python standard library to measure the time it takes for each request to return a response, and then adds this measurement to the results dictionary, under the corresponding URL. The reason I randomize the list is that I intend to run multiple instances of this function in parallel to simulate concurrent clients. It is convenient for the function to iterate over the queries in random order, as this ensures that the database sees a variety of queries at any given time, rather than receiving multiple instances of the same query.
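The post does not show how those parallel clients are launched, so here is a minimal sketch of one way to do it with a thread pool. The server URL, API key, date range, and number of clients are placeholder assumptions.

from concurrent.futures import ThreadPoolExecutor

NUM_CLIENTS = 4  # arbitrary number of simulated concurrent clients
server = 'https://example.com'           # placeholder base URL
apikey = 'my-api-key'                    # placeholder API key
start, end = '2024-01-01', '2024-12-31'  # placeholder time period

# Each worker runs its own copy of the test loop. The workers mostly wait
# on the curl subprocesses, so threads are enough here, and appending to
# the per-URL lists in results is safe under CPython's GIL.
with ThreadPoolExecutor(max_workers=NUM_CLIENTS) as executor:
    futures = [executor.submit(test, server, apikey, start, end)
               for _ in range(NUM_CLIENTS)]
    for future in futures:
        future.result()  # surface any errors raised by a worker

# results now maps each URL to the list of measured response times.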