Fix the PIX participants API for the new CSV format #751
Conversation
@RodriAndreotti is attempting to deploy a commit to the BrasilAPI Team on Vercel. A member of the Team first needs to authorize it.
@RodriAndreotti, can you resolve the conflicts? It also looks like there are more changes than necessary, for example the CEP file. I think we should publish this as v2 and return an error message on v1 saying it was discontinued.
Oops, I only saw the message just now, haha. Sorry. About the CEP file, that was the linter, I didn't touch it. As for the conflicts: I'll take a look.
136309b to 94c1478
Conflicts resolved!
Pull request overview
This PR updates the PIX participants API to handle a new CSV format from the Brazilian Central Bank (BCB). The changes include modifications to URL structure, CSV parsing logic, encoding handling, and weekend data retrieval logic. However, the PR introduces several critical issues including breaking API contract changes, incorrect data mapping, and inconsistent header validation.
Key Changes:
- Updated BCB data source URL to new format with different path structure
- Added encoding detection and TextDecoder for proper charset handling (see the sketch after this list)
- Modified CSV header validation to use subset matching instead of exact equality
- Implemented weekend handling logic to fetch data from previous business days
- Removed the inicio_operacao field (set to null) as BCB no longer provides this data
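As a rough illustration of the encoding handling listed above, here is a minimal sketch. It is not the PR's actual code: the helper name `fetchCsvText` and the charset-from-header heuristic are assumptions, and the real implementation may detect the encoding differently.

```js
// Minimal sketch (assumed helper, not the PR's code): download the BCB CSV and
// decode it with the charset advertised in the response headers, if any.
async function fetchCsvText(url) {
  const response = await fetch(url);
  const buffer = await response.arrayBuffer();

  // e.g. "text/csv; charset=ISO-8859-1" — fall back to UTF-8 when absent.
  const contentType = response.headers.get('content-type') || '';
  const charsetMatch = contentType.match(/charset=([^;]+)/i);
  const encoding = charsetMatch ? charsetMatch[1].trim() : 'utf-8';

  // TextDecoder handles both UTF-8 and Latin-1 labels without extra dependencies.
  return new TextDecoder(encoding).decode(buffer);
}
```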
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| services/pix/participants.js | Core changes to CSV parsing, encoding handling, header validation, and data mapping to accommodate new BCB CSV format |
| pages/api/pix/v1/participants.js | Added weekend detection logic to fetch previous day's data when BCB doesn't update on Saturdays/Sundays |
| pages/docs/doc/pix.json | Updated API documentation to reflect that inicio_operacao field may be empty and data is from current or last business day |
| .nvmrc | Added Node.js version specification (v20) for consistent development environment |
```js
nome_reduzido: data[1],
modalidade_participacao: data[8],
tipo_participacao: data[6],
inicio_operacao: null,
```
Copilot (AI) · Dec 20, 2025
The API contract is being broken. The field inicio_operacao was previously guaranteed to return a date string, and now it returns null for all participants. This is a breaking change that affects all API consumers expecting this field.
According to the documentation update, BCB stopped providing this data in November 2025, but the solution should maintain backward compatibility. Consider one of these approaches:
- Create a new API version (v2) with the updated structure
- Keep trying to parse the field when available instead of hardcoding null (see the sketch below)
- Clearly document this as a breaking change and coordinate with stakeholders

This violates the fundamental principle: "ALWAYS maintain backward compatibility in existing endpoints" ("SEMPRE mantenha compatibilidade retroativa em endpoints existentes").
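A minimal sketch of the second option, assuming the new CSV still carries a start-of-operation column at some position. The index name `START_DATE_INDEX` is hypothetical and `parseToDate` is the existing helper referenced in the diff; the real column position would have to be confirmed against the new BCB file.

```js
// Hypothetical column position for the start-of-operation date in the new CSV layout.
const START_DATE_INDEX = 7;

// Keep the old contract when the value exists; return null only when BCB omits it.
function mapInicioOperacao(data) {
  const raw = data[START_DATE_INDEX];
  return raw ? parseToDate(raw) : null;
}
```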
```diff
  return {
-   ispb: data[0],
-   cnpj: data[1],
-   nome: data[2],
-   nome_reduzido: data[3],
-   modalidade_participacao: data[4],
-   tipo_participacao: data[5],
-   inicio_operacao: parseToDate(data[6]),
+   ispb: data[2],
+   nome: data[1],
+   nome_reduzido: data[1],
+   modalidade_participacao: data[8],
+   tipo_participacao: data[6],
```
Copilot (AI) · Dec 20, 2025
The data mapping appears incorrect. The code defines only 4 expected headers but then accesses array indices beyond those headers (data[6], data[8]).
The expectedHeaders array has:
- Index 0: 'nomereduzido'
- Index 1: 'ispb'
- Index 2: 'modalidadedeparticipaçãonopix'
- Index 3: 'tipodeparticipaçãonospi'
But the mapping uses data[2], data[1], data[6], data[8], which suggests the actual CSV has more columns than what's being validated. Either:
- Update expectedHeaders to include all columns in the CSV, or
- Adjust the mapping indices to match the 4 validated headers
This mismatch will cause incorrect data to be returned or potential runtime errors.
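One way to keep the validated headers and the mapping in sync is sketched below. This is not the PR's code; the header names are the normalized ones quoted in this comment, and the idea is simply to resolve column positions from the header row instead of hard-coding indices.

```js
// Build a name -> position lookup from the (already normalized) header row.
function buildColumnIndex(headerRow) {
  return new Map(headerRow.map((name, position) => [name, position]));
}

// Map one CSV row using the lookup, so reordered or extra columns cannot
// silently shift data into the wrong fields.
function mapParticipant(columns, data) {
  return {
    ispb: data[columns.get('ispb')],
    nome_reduzido: data[columns.get('nomereduzido')],
    modalidade_participacao: data[columns.get('modalidadedeparticipaçãonopix')],
    tipo_participacao: data[columns.get('tipodeparticipaçãonospi')],
  };
}
```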
```js
ispb: data[2],
nome: data[1],
nome_reduzido: data[1],
```
Copilot (AI) · Dec 20, 2025
The field nome_reduzido is being set to the same value as nome (both use data[1]). This appears to be incorrect since these fields should represent different data (full name vs. short name).
According to the expectedHeaders, 'nomereduzido' is the first column (index 0 after splitting). The mapping should likely be:
- nome_reduzido: data[0] (nomereduzido)
- ispb: data[1] (ispb)
Please verify the actual CSV structure and map the fields correctly.
```js
try {
  return lines
    .map((line) => line.split(';'))
    .filter(([ispb]) => ispb)
```
Copilot (AI) · Dec 20, 2025
The filter filter(([ispb]) => ispb) is filtering by the first element of each row (index 0), but according to the header mapping, ISPB is at index 1, not index 0. This means the filter is checking the wrong column.
This should be updated to match the actual column position of ISPB in the CSV, or if filtering by the first column is intentional, the logic should be documented and verified against the actual CSV structure.
Suggested change:
```diff
- .filter(([ispb]) => ispb)
+ .filter(([, ispb]) => ispb)
```
It does have a point, @RodriAndreotti, or are we missing something?
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
```js
const today = new Date();
// Handle weekend cases, as BCB does not update data on Saturdays and Sundays
let response;
if (today.getDay() === 0 || today.getDay() === 6) {
  response = await getPixParticipants(false, today.getDay() === 0 ? 2 : 1);
} else {
  response = await getPixParticipants();
}
```
Copilot (AI) · Dec 22, 2025
The weekend handling logic has a subtle issue. When checking today.getDay() === 0 (Sunday), it tries to fetch data from 2 days before. However, the actual parameter is not being passed to obtainPixParticipantList, so on line 20 the fallback logic will call getPixParticipants(false) which defaults to daysBefore = 1. This means if there's a 404 on Sunday for the 2-days-ago file, the fallback will only go back 1 day (Saturday) instead of trying other dates. Consider passing the appropriate daysBefore value to the fallback as well.
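A sketch of one way to address this; the function signature and the error check are assumptions about the PR's code, not its actual implementation:

```js
// Placeholder 404 check; the real code may inspect its HTTP client's error shape instead.
const isNotFound = (error) =>
  error?.status === 404 || error?.response?.status === 404;

// Thread daysBefore into the fallback so a Sunday request (daysBefore = 2) does not
// silently reset to the default of 1 day; on a 404, step one more day back.
async function obtainPixParticipantList(fromToday = true, daysBefore = 1) {
  try {
    return await getPixParticipants(fromToday, daysBefore);
  } catch (error) {
    if (isNotFound(error)) {
      return getPixParticipants(false, daysBefore + 1);
    }
    throw error;
  }
}
```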
```js
  const headerSet = new Set(headers);
  return requiredHeaders.every((h) => headerSet.has(h));
}
```
Copilot (AI) · Dec 22, 2025
The daysBefore parameter was added but it's not clear what its purpose is just from the signature. Consider adding JSDoc documentation to explain when and why this parameter should be used, especially since it's part of the weekend handling logic introduced in this PR.
Suggested change:
```js
/**
 * Fetches the Pix participants CSV file directly from the Central Bank.
 *
 * @param {boolean} [fromToday=true] Whether the date used in the URL is today's date.
 *     When `false`, the date is computed from `daysBefore`. Useful when the file for
 *     the current day has not been published yet (for example on weekends or holidays)
 *     and the file from the last available business day must be fetched instead.
 * @param {number} [daysBefore=1] Number of days to subtract from the current date when
 *     `fromToday` is `false`. For example, `daysBefore = 1` fetches the previous day's
 *     file, and `daysBefore = 3` can be used on a Monday to fetch the previous Friday's file.
 * @returns {Promise<string>} Contents of the CSV file as an already decoded string.
 */
```
```js
const headerSet = new Set(headers);
return requiredHeaders.every((h) => headerSet.has(h));
```
Copilot (AI) · Dec 22, 2025
The isEqual function has been changed to check if required headers are present in the headers array, but this is less strict than the original implementation. The original checked for exact equality (same order and length), while the new version only checks if the required headers exist somewhere in the array. This means the function will pass even if there are extra columns or if columns are in a different order. However, the data mapping on lines 78-83 relies on specific column indices, so if columns are reordered, the data will be mapped incorrectly. The validation should ensure the headers match exactly in order and position.
Suggested change:
```diff
- const headerSet = new Set(headers);
- return requiredHeaders.every((h) => headerSet.has(h));
+ if (requiredHeaders.length !== headers.length) {
+   return false;
+ }
+ return requiredHeaders.every((header, index) => header === headers[index]);
```



Fix the PIX participants API based on the new BCB CSV format