# All Meetings
# UBM Demo Updates
## Summary

The meeting focused on resolving technical and configuration tasks related to a SOC 2 compliance audit and addressing specific feedback for a UBM demo account. Key discussions centered on preparing the demo environment with appropriate data and settings for an upcoming demonstration, while ensuring ongoing compliance work proceeds without disruption to other customers.

### SOC 2 Compliance Progress

Updates on two parallel tracks of the SOC 2 compliance effort were reviewed.

- **Vendor Management**: The review and documentation of all third-party vendors used by the company is largely complete, with only a few remaining items needing final verification.
- **GCP and Avanta Integration**: The task to integrate Google Cloud Platform (GCP) with Avanta has been assigned, and initial work has begun. It was suggested to create a dedicated internal Slack channel for faster, more focused discussions on such compliance and technical tasks among the core team.

### UBM Demo Account Configuration and Feedback

A significant portion of the meeting was dedicated to addressing feedback from a user (likely a customer success or sales representative) regarding the UBM demo account. The goal is to optimize the account for effective product demonstrations.

- **Cleaning Up Superfluous Data**: It was confirmed that custom location attributes currently visible in the demo account are unnecessary and can be removed to present a cleaner interface.
- **Data Population and Analytics Default View**: A primary concern was that the main analytics report initially loads with sparse or outdated data, which is not ideal for demos. A script is being prepared to generate and import more recent data (for 2024 and 2025) from other customers into the demo account. This population of contemporary data is expected to resolve the issue of empty default views, making a default filter change to 36 months unnecessary.
- **Handling Data Validations**: There was a request to re-enable bill validation errors in the demo to showcase the platform's error-checking capabilities. However, a technical limitation was noted: simply re-parsing existing bills will not resurrect old, resolved errors. The proposed solution is twofold: first, re-enable the validation system, and second, ensure the newly imported data contains bills with unresolved errors. The demo team was advised to preserve a set of known "error" bills without editing them, so there are always validation examples to show.
- **Customizing Default Report Views**: Feedback indicated that the default expanded "tree view" in a report was cluttered for demo purposes. A significant technical constraint was identified: the main analytics report and its default filters are shared globally across all customers, so any change to the default view (like collapsing the tree) would impact every client, which is not acceptable. A future solution discussed involves duplicating the main analytics report to create a demo-specific version with custom default filters that do not affect other users, but this is not currently implemented.

### Data Validation and Error Display

A detailed technical discussion clarified the mechanics of showing validation errors within the demo environment.

- **Re-enabling vs. Re-processing**: The distinction was made between reactivating the validation system and re-parsing existing bills. Turning validations back on will flag new errors in incoming data.
- **Limitation with Existing Data**: For bills where errors were previously identified and marked as "resolved," re-parsing will not make those specific errors reappear, because the system remembers their resolved state. Simply reprocessing old data therefore will not restore the validation display the demo team found useful.
- **Strategy for Demos**: The effective strategy is to ensure the demo account contains a fresh set of data (via the new import) with inherent, unresolved validation errors. This will provide authentic examples of the platform's validation features during demonstrations (see the re-dating sketch below).
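
To make the "populate recent data" step concrete, here is a minimal sketch of re-dating imported bills so the newest ones land in a current period. It assumes list-of-dict bills with `statement_date`, `due_date`, and `service_period_end` fields; the real import script and its schema were not shown in the meeting.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

def redate_bills(bills, target_latest=date(2025, 6, 1)):
    """Shift every bill forward by whole months so the newest bill lands
    in the target month while relative spacing between bills is kept."""
    latest = max(b["statement_date"] for b in bills)
    offset = ((target_latest.year - latest.year) * 12
              + (target_latest.month - latest.month))
    for b in bills:
        for field in ("statement_date", "due_date", "service_period_end"):
            if b.get(field):
                b[field] += relativedelta(months=offset)
    return bills
```

Keeping the month offset uniform preserves seasonality in the copied data, which matters for analytics views that chart usage over time.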
# Validation and Sync Review
## Summary

The meeting focused on reviewing the current development status of several key features within a credential and billing validation system, with particular attention to bug fixes, new functionality, and data synchronization processes.

### Credential Validation System Updates

A critical bug fix was deployed to the production environment to correct the logic for matching credentials. Previously, the system could incorrectly match records based only on customer, vendor, and password, bypassing the essential account number check. The fix ensures all matching is done using the combination of customer, vendor, and account number, which resolves issues with "mismatching password" and "mismatching username" flags.

- **Fix for Mismatched Account Number Logic**: The matching algorithm was updated to enforce validation against the account number, a crucial field for accurate credential pairing.
- **Removal of Redundant Filter**: An additional task was identified to remove the "mismatching account number" filter from the user interface's "needs action" list, as this state is not actionable for users in the current workflow.

### Archive and Comments Functionality

Development is underway on an archive feature, intended as a soft-delete mechanism for invalid credentials. This functionality is seen as a necessary stopgap before implementing a full bi-directional sync with external systems. The related ability to add comments to credentials was also discussed, to help teams log the reasons for actions taken during review.

- **Archive as a Soft Delete**: The archive feature will allow users to mark credentials as invalid without permanently deleting them, creating an audit trail and preparing data for future cleanup tasks in other systems.
- **Comments for Auditability**: While comments can be added, their primary use is expected to be within the validation interface itself, rather than on a separate management page, to provide immediate context during the review workflow.

### Failed IPS Records Investigation

An investigation was conducted into a specific error state where bill records show a status of "failed IPS complete." In this state, the pre-audit JSON data becomes empty, preventing users from taking corrective action. The root cause was traced to a specific step in the data processing pipeline.

- **Root Cause Identified**: The issue occurs after the IPS extraction completes but a subsequent processing step (ND JSON DTO) fails. A bug was found where the system uses a `save` operation instead of an `update`, which inadvertently clears the pre-audit JSON data (a minimal illustration of this failure mode follows this section).
- **Proposed Solution and Impact**: A code fix has been prepared to correct this logic. Once reviewed and deployed, it will prevent the issue for future records. For the 17 existing production records stuck in this state, the fix may allow them to be re-processed correctly.

### User Interface Enhancements for Validation

Work is in progress to add new filtering capabilities to the bill validation screen, improving the team's ability to locate and manage specific records efficiently.

- **New Filter Criteria**: Development is ongoing to add dropdown filters for **Client ID**, **Vendor**, and **Bill ID**, significantly improving the searchability of records compared to the current interface.
- **Implementation Standard**: The new filters are required to follow the same dropdown format already established in other parts of the application, ensuring a consistent user experience.
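
A minimal, self-contained illustration of the `save`-vs-`update` failure mode from the "Failed IPS Records Investigation" above: a full-row `save` built from a partial object silently wipes columns the caller never set, while a targeted `update` leaves them alone. The `FakeDB` class and field names are stand-ins, not the actual pipeline code.

```python
class FakeDB:
    """Tiny in-memory stand-in for the persistence layer (illustration only)."""
    def __init__(self):
        self.rows = {}

    def save(self, row):
        # Full-row write: replaces the stored row with exactly `row`.
        self.rows[row["id"]] = dict(row)

    def update(self, row_id, fields):
        # Partial write: patches only the named columns.
        self.rows[row_id].update(fields)

db = FakeDB()
db.save({"id": 42, "status": "ips_complete", "pre_audit_json": '{"charges": []}'})

# Buggy handler: SAVEs a partial object, wiping pre_audit_json.
db.save({"id": 42, "status": "failed_ips_complete"})
assert "pre_audit_json" not in db.rows[42]         # data silently lost

# Fixed handler: UPDATE touches only the status column.
db.rows[42]["pre_audit_json"] = '{"charges": []}'  # restore for the demo
db.update(42, {"status": "failed_ips_complete"})
assert db.rows[42]["pre_audit_json"] == '{"charges": []}'
```
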
### Account Synchronization and File Uploads

Significant progress was reported on the automatic sync of account data from an external platform (Arcadia) and the subsequent upload of statement documents to SharePoint. Initial testing revealed important data discrepancies that require resolution.

- **SharePoint Upload Structure**: The automated process successfully creates a structured folder hierarchy in SharePoint: a main folder for the Client ID, subfolders by date, and individual statement files named with client, account, and statement details. Future improvements will add the client name and statement date to the file and folder names for better readability.
- **Data Discrepancy Analysis**: During testing, a major discrepancy was found between the number of account credentials stored internally (~2,600) and the number successfully synced from Arcadia (~1,700 credentials and ~1,200 accounts), highlighting a potential gap in data integrity.
- **Strategic Approach to Validation**: The team emphasized the importance of thoroughly validating the sync logic and understanding these discrepancies for the first client before scaling the process. This careful approach is crucial to prevent compounding errors for future clients. A technical adjustment was also made to process records in smaller batches to avoid system timeouts during the sync operation (a minimal batching sketch follows).
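
The batching adjustment can be as simple as the following sketch; the batch size of 100 and the `sync_batch` callable are assumptions, not values confirmed in the meeting.

```python
from itertools import islice

def batched(iterable, size):
    """Yield lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def sync_accounts(records, sync_batch, batch_size=100):
    # Processing the full record set in one call was timing out; smaller
    # batches keep each request inside the timeout window.
    for chunk in batched(records, batch_size):
        sync_batch(chunk)   # one sync call per chunk
```
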
# Channel-Based Communication
## Summary

### Transition from Direct Messages to Structured Channels

The core announcement involves a strategic shift in internal communication, moving away from direct messages (DMs) for project and operational discussions toward dedicated Teams channels. The migration will begin with the "DS product ops" channel and is expected to expand to other communication flows over time. The primary motivation is to improve the clarity, organization, and traceability of team discussions.

### Key Benefits of Using Dedicated Channels

Adopting proper channels is presented as a solution to several limitations inherent in DM-based communication. The central advantage is enabling more coherent, referenceable conversations.

- **Improved Discussion Threading:** Channels facilitate organized threads around specific topics, which prevents fragmented replies and makes it easier to follow the full context of a conversation without losing the original point.
- **Enhanced Tool Integration:** Structured channels offer better compatibility and integration potential with the team's other software tools, paving the way for a more connected and automated workflow.

### Practical Tips for Managing Teams and Channels

To ease the transition and help team members manage the new structure effectively, several practical tips for using Microsoft Teams were shared, focused on personal organization and reducing visual clutter.

- **Customizable Sections for Organization:** Users can create custom sections within the Teams interface (e.g., "Favorites," "Later") to categorize and prioritize channels and chats, allowing for a personalized workspace that suits individual workflows.
- **Flexible Channel Visibility:** The interface allows users to hide or show channels as needed. This is particularly useful for managing a large number of channels, as users can display only the most relevant ones and collapse sections to maintain a clean, focused view.
- **Dynamic Member Management:** Channels support adding members flexibly as initiatives evolve, ensuring the right people have access to relevant discussions.

### Implementation and Forward Path

The change will be implemented gradually, starting with the migration of specific discussions from DMs to the "DS product ops" channel, with a plan to extend this model to other areas. The announcement concludes with an open invitation for feedback on the new communication approach.
# Tool Migration Update
## Summary

The meeting consisted of a single, brief announcement regarding a significant upcoming change in the team's communication tools.

### Announcement of Communication Platform Migration

A migration away from Microsoft Teams as the primary communication tool was announced. The replacement platform was not named in this initial communication. The change signals a strategic shift in how the team will collaborate and communicate going forward.

### Rationale for the Change

The underlying reasons and expected benefits of the migration were not detailed. While the "what" was clearly stated (moving off Teams), the "why" was not elaborated, leaving strategic drivers such as improved functionality, better workflow integration, cost considerations, or enhanced user experience as topics for follow-up communications.

### Transition Timeline and Process

No timeline or detailed migration plan was provided; the announcement served as a preliminary heads-up rather than a rollout plan. Key logistics, including the projected start date, the phasing approach, data transfer procedures, and the final cut-over date, remain to be communicated.

### Implications for Team Workflow

Moving to a new communication platform will affect daily operations, including instant messaging, video conferencing, file sharing, and project channel management, though the exact impact is still undefined. The announcement implicitly signals a coming period of adaptation as the team learns the new tool's features and interface.

### Next Steps and Future Communication

The announcement concluded without immediate action items or a designated channel for questions. This initial message appears to be the first step in a longer change management process; follow-up communications will be needed to answer team questions, provide training resources, and establish a concrete migration plan.
# Team Check-In
## Summary

The provided transcript is extremely limited and does not contain substantive discussion from which a meaningful meeting summary can be generated. The single line of dialogue present appears to be an opening greeting or small talk, which falls outside the scope of a summary.

### Insufficient Content for Detailed Summary

**The transcript does not contain discussable meeting topics or decisions.** The available content is a fragment that does not allow for the identification of key themes, action items, or conclusions.

- **Transcript Length and Depth:** The transcript consists of only one short speaker turn, which is insufficient to reconstruct the context, purpose, or outcomes of the meeting.
- **Absence of Discussed Topics:** No project updates, strategic discussions, problem-solving, data analysis, or planning conversations are present in the provided text, so no topic-specific sections can be written.
- **Implication for Summary Fidelity:** Any detailed summary would be entirely speculative rather than grounded in the transcript.
# Demo Environment Alignment
## Summary

The meeting primarily focused on optimizing the demo environment for the UBM platform to enhance sales effectiveness, addressing several technical and data-related issues that impact demonstration quality. A significant portion was also dedicated to pricing strategy and feature development, particularly around handling customer data in Excel format.

### Demo Environment Optimization

This discussion centered on making the demo account more polished and effective for sales presentations by removing non-essential elements and ensuring data displays correctly.

- **Removing Non-Core Features:** To streamline demos and focus on core UBM value propositions, it was agreed to hide the SAM module and the Sustainability tab from the demo environment by default. The Sustainability tab in particular can create confusion during sales pitches for the core platform and often raises more questions than it answers regarding data update frequency and methodology. It can be revealed selectively if a prospect expresses specific interest in emissions tracking.
- **Fixing Data Validation Displays:** A key issue was identified where bill-level error validations (e.g., alerts for consumption spikes) were not displaying, showing only generic system information instead. This removes a useful storytelling element from demos. The action is to re-enable these validations specifically for the demo account and re-process historical bills to restore the functionality.
- **Cleaning Up Location Attributes:** The demo account's location attributes contain numerous legacy fields (e.g., EE Measure 1, ECM2) that clutter the interface and are irrelevant for demonstrations. These will be removed to present a cleaner, more focused view to prospects.

### Platform Performance and User Experience

Challenges related to system speed and default settings were reviewed, as they directly affect demo fluidity and first impressions.

- **Addressing Slow Load Times:** Acknowledged performance issues, particularly when loading the portfolio view, were attributed to the current server architecture and traffic patterns. A migration to new Azure infrastructure scheduled for August/September is expected to resolve these through optimized load balancing.
- **Adjusting Default Report Views:** Multiple reports (e.g., Bill Health, Anomalies, Light Bills) were found to default to date ranges with no data (e.g., 2025, 2026), presenting blank or unhelpful views. Solutions under consideration include updating the demo account with more recent mock data or changing the default date settings (e.g., to "last 36 months") specifically for the demo environment so that useful data is displayed immediately.
- **Refining Filter Defaults:** In the analytics section, filters are pre-populated upon login, requiring manual clearing to start a clean analysis. The plan is to adjust these global defaults to start with a blank slate, improving the demo flow.

### Data Reporting and Integrity

The conversation highlighted the need for accurate and presentable data within reports to effectively demonstrate the platform's analytical capabilities.

- **Ensuring Report Data is Current:** For reports to be compelling, they need to reflect recent time periods. The team is evaluating methods to populate the demo account with data that appears current, such as backdating existing data, to make analytics and health reports immediately valuable during a demo.
- **Clarifying Excel Data Ingestion:** A major strategic topic was how to handle prospects who require ingestion of utility data from Excel files instead of PDF bills. The consensus is to develop a standardized, robust process with clear rules of engagement, including a strict required format, SFTP setup, defined error-handling responsibilities, and an associated per-meter fee (a hedged format-validation sketch appears at the end of this summary). This aims to move away from one-off custom solutions.

### Sales Strategy and Product Development

The discussion expanded into broader sales enablement and product roadmap topics to strengthen the overall pitch and address common prospect inquiries.

- **Identifying Key Differentiators:** To bolster the sales pitch, core platform strengths were identified, particularly a user-friendly interface that allows intuitive drilling from high-level portfolio views down to granular bill details without overwhelming the user, in contrast to competitors that may present overly complex data dumps.
- **Developing a Shareable Roadmap:** There is a recognized need for a sanitized, high-level product roadmap that the sales team can reference with prospects. The goal is a monthly visual update highlighting major upcoming features (like interval data integration) in customer-friendly language, without exposing internal operational tickets.
- **Pricing Model Evolution:** The discussion emphasized the need to formalize pricing for non-standard services like Excel ingestion to ensure profitability and account for ongoing support costs. The approach will involve creating a standard offering with clear associated costs, moving away from highly customized, and potentially loss-leading, agreements.
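
As a hedged sketch of what the "strict required format" check for Excel ingestion might look like (the actual required columns were not specified in the meeting; the names below are placeholders):

```python
import pandas as pd

# Hypothetical required columns for the standardized Excel template.
REQUIRED_COLUMNS = {"account_number", "meter_id", "service_start",
                    "service_end", "usage_kwh", "total_cost"}

def validate_excel(path: str) -> list[str]:
    """Return a list of human-readable problems; an empty list means accepted."""
    problems = []
    df = pd.read_excel(path)
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    else:
        if df["account_number"].isna().any():
            problems.append("blank account numbers found")
        if pd.to_numeric(df["usage_kwh"], errors="coerce").isna().any():
            problems.append("non-numeric usage values found")
    return problems
```

Rejecting files up front with a clear problem list is what makes the error-handling responsibility shift to the customer workable.
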
# Credential Cleanup Overview
## Summary

### Overview of the Credentials Validation Screen

A walkthrough was provided of the Production DSS application, specifically its credentials validation functionality. The primary purpose is to manage and clean up credential discrepancies between different data sources.

- **Application Access and Purpose:** The demonstration was conducted within the Production DSS app, which houses the total inventory of credentials. The immediate focus is the cleanup process for specific sets of credentials, such as those for Vectra.

### Core Function: The "Needs Action" Filter

The central tool for the cleanup task is the "Needs Action" filter. Activating this filter isolates credentials that require attention by excluding any that are already consistent and valid.

- **Filtering Logic:** Clicking "Needs Action" and then "Search" displays only credentials falling into four discrepancy categories: mismatching usernames, mismatching passwords, accounts existing in only one source (either Data Services or UBM), and mismatched clean accounts.
- **Critical Filter Behavior:** It is crucial to understand that combining multiple filters uses **AND** logic, not OR logic. Applying additional filters concurrently will narrow results further and likely return zero records, so it is recommended to use only the "Needs Action" filter initially for the broadest view of issues (a minimal sketch of this behavior follows this summary).

### Types of Credential Discrepancies

The filter reveals several common scenarios that need resolution, each requiring a specific action based on the nature of the discrepancy.

- **Account in a Single Source:** A credential may exist in only one system, such as Data Services (DS) but not UBM. In this case, the available action is to set the present system (e.g., DS) as the source of truth.
- **Mismatched Details Between Sources:** More frequently, an account exists in both systems but with slight variations in details, such as username spelling. These require manual review to determine the correct information and set the appropriate source of truth.
- **Handling Invalid Credentials:** For credentials that are invalid, archived, or deleted, archiving functionality is planned. For the current process, such records can be noted and ignored for the time being.

### Process for Resolving Discrepancies

The interface provides a methodical workflow for investigating and resolving each flagged credential.

- **Review and Action:** For each credential listed, users can view associated information such as the URL and account number. The key step is to click "Resolve Discrepancy," which opens the interface for addressing the specific issue.
- **Setting Source of Truth:** The resolution process involves designating which system, Data Services or UBM, holds the correct, authoritative data for that credential.
- **Adding Context:** Users can add notes to a credential, recording the rationale for the resolution decision, which aids auditability and future reference.

### Current Limitations and Future Updates

The tool is operational for core cleanup tasks but is acknowledged to be a work in progress, with enhancements on the way.

- **Immediate Workflow:** The current guidance is to rely on the "Needs Action" filter as the primary tool for identifying credentials requiring cleanup.
- **Planned Enhancements:** Development is ongoing, with updates promised for the future.
  Specifically mentioned is the forthcoming ability to archive invalid or deleted credentials directly within the system, which is not yet available in the production environment.
- **Interface Notes:** Minor interface interactions were noted during the demo, such as the Zoom and Teams controls visible in the screen share.
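
A minimal sketch of the filter semantics described above: "Needs Action" is an OR across the four discrepancy categories, while stacking additional filters ANDs them together, which is why combined filters quickly narrow to zero records. Field names are illustrative, not the DSS schema.

```python
NEEDS_ACTION_FLAGS = ("mismatching_username", "mismatching_password",
                      "single_source_only", "mismatched_clean_account")

def needs_action(cred: dict) -> bool:
    # OR across the four discrepancy categories.
    return any(cred.get(flag) for flag in NEEDS_ACTION_FLAGS)

def apply_filters(creds, *predicates):
    # AND semantics: a record must satisfy every active predicate,
    # which is why stacking filters quickly narrows results to zero.
    return [c for c in creds if all(p(c) for p in predicates)]

# Recommended broad view: the single "Needs Action" filter.
# broad = apply_filters(all_creds, needs_action)
# Adding a second filter ANDs it in, shrinking the result set:
# narrow = apply_filters(all_creds, needs_action,
#                        lambda c: c.get("mismatching_password"))
```
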
# Pricing and Ingestion
## Summary

### Pricing Strategy and Historical Context

The meeting opened with a discussion of the company's current and historical approach to pricing and customer billing. Historically, pricing was not standardized and often required direct contact to sell, with the primary goal being a 20% margin. The average margin achieved today is around 35%, which is considered acceptable but not ideal for a SaaS business model. Broker fees were clarified as an ongoing customer acquisition cost, not a one-time marketing expense: a dollar per meter is paid to brokers for the lifetime of the customers they bring in.

- **Lack of bottom-up cost analysis:** A significant historical gap was identified: no one has ever calculated the fundamental cost per service, per bill, or amortized development spend. This absence of granular cost data has made precise pricing and margin optimization difficult.
- **Pricing as a reaction, not a strategy:** The initial pricing model was described as lacking foresight, established simply to secure sales rather than as part of a strategic, product-led growth plan.

### Operational Challenges with Data Ingestion

A core operational bottleneck centers on how customers submit invoice data, specifically the handling of Excel files. Many customers request the ability to submit formatted Excel files via SFTP, but the company has historically refused, leading to lost deals and reputational damage for being unresponsive to customer needs.

- **High cost of manual processing:** The existing, semi-manual process for handling non-standard Excel files is performed by the data services team. While previously tolerable at high volumes due to lower fixed costs, the opportunity cost is now higher: this team could be focused on more valuable work such as data cleanup or UBM operations.
- **The need for a scalable solution:** The discussion strongly advocated moving away from custom, one-off solutions for individual customers. Any new capability for Excel ingestion should be built as a robust, standardized feature offered to multiple customers, turning a historical pain point into a scalable service.
- **Shifting cost responsibility:** To improve margins, there was consensus on charging customers explicitly for any special handling or support required for data ingestion, such as a per-meter or per-bill fee, rather than absorbing these operational costs.

### Product Development Priorities and Technical Debt

The conversation highlighted critical product features that have long been requested but never implemented, creating significant operational overhead. The amount of manual work performed pre- and post-processing on the platform is not reflected in the pricing, which is a key reason margins are not at typical SaaS levels.

- **The SFTP upload/download solution:** The most direct solution to many data ingestion issues would be a native feature within UBM for customers to upload bills and download payment files themselves. This feature has been on the roadmap for years, and past demos exist, but it was never fully implemented due to ancillary issues such as file naming conventions and system design hurdles.
- **Breaking the cycle of operational debt:** A vicious cycle was identified: the development team cannot build new features because it is too busy handling operational issues, and it is busy with operations because sales has sold custom, non-standard solutions. Breaking this chain is imperative for product progress.
- **Empowering the customer:** The philosophy emerged that the platform should push more responsibility onto the customer for simple, repetitive tasks (e.g., drag-and-drop uploads, verifying data), allowing the company to focus its paid effort on processing and platform intelligence.

### Customer Feedback, Reputation, and Strategic Alignment

The company's repeated "no" to common customer requests like Excel ingestion has been damaging, creating a perception that the development team does not listen to customer feedback. Upcoming projects require careful internal communication to manage expectations and incorporate control-focused feedback.

- **Recapturing lost opportunity:** There are likely at least ten customers from the past year who were declined and could now be re-engaged if a standardized Excel ingestion feature were available, representing a direct opportunity to recoup lost revenue.
- **Pre-launch stakeholder reviews:** For upcoming features like the new credentials view, it is considered critical to get pre-emptive, friendly reviews from key internal stakeholders (like Tara in Sales Ops) to anticipate pushback and align messaging before broader communication.
- **Incorporating the finance and controls perspective:** Involving finance personnel (like Jeff Lochte) focused on controls and risk management is crucial, especially for projects like Arcadia that are seen as a panacea. Their concerns and questions must be addressed early in planning to ensure solutions are sound and compliant.

### Planning for Sales Enablement

The final part of the discussion focused on an immediate need to update the demo account data to better showcase the platform's capabilities for sales demonstrations. This task highlights the disconnect between sales needs and development bandwidth.

- **Creating representative demo data:** The goal is to generate a clean, ideal dataset that showcases key features (services, bills, fees, errors, locations, analytics) with a controlled, minimal error rate (e.g., 99% clean, 1% error) to demonstrate error-catching capabilities without overloading the demo system (a minimal generation sketch follows this summary).
- **Understanding the sales perspective:** The preparation work involves thinking from a sales persona viewpoint: mapping the customer's starting point, desired ending point, and the specific features that bridge that gap.
- **Addressing historical delays:** This data update request has been outstanding with another team for nearly a year without action, underscoring the persistent resource tension between ongoing operations and strategic enablement work.
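
A minimal sketch of generating the "99% clean, 1% error" demo dataset described above; field names and value ranges are invented for illustration.

```python
import random

def make_demo_bills(n=1000, error_rate=0.01, seed=7):
    """Generate n synthetic bills, flagging ~error_rate of them with a
    deliberate validation error so demos always have examples to show.
    Field names are illustrative, not the production schema."""
    rng = random.Random(seed)          # fixed seed -> reproducible demo set
    bills = []
    for i in range(n):
        bill = {
            "bill_id": f"DEMO-{i:05d}",
            "usage_kwh": round(rng.uniform(500, 5000), 1),
            "total_cost": round(rng.uniform(50, 800), 2),
            "validation_error": None,
        }
        if rng.random() < error_rate:
            # Inject a plausible, catchable error (consumption spike).
            bill["usage_kwh"] *= 10
            bill["validation_error"] = "consumption_spike"
        bills.append(bill)
    return bills
```
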
# January Review, February Plan
## **Summary of January Unplanned Work and February Planning**

This meeting served as a brief check-in to address pending items from January and to review the roadmap for upcoming projects in February. The discussion focused on quantifying unplanned work, resolving external dependencies for report generation, and assessing the timeline and scope of key development initiatives. Offers of managerial escalation to unblock external partners were also extended.

## **Unplanned Work Review for January**

A primary objective was to close out the accounting of unplanned engineering work completed in January. This involves categorizing and quantifying tasks that fell outside the planned sprint cycles.

- **Request for Unplanned Work Data**: There was a specific request to compile a list of all unplanned work items and reports generated during January. This data is crucial for tracking team bandwidth and understanding interruptions to the planned roadmap. The expectation is that the volume of unplanned fixes was significant, similar to previous months.
- **Action on Data Compilation**: The data will be sourced directly from the project management tool (Jira) to ensure accuracy. The compilation will include not just a list of tasks but also the percentage of time spent on unplanned versus planned work, providing a clear metric for the month's productivity distribution (a hedged sketch of this calculation follows this summary).

## **Updates on Pending Reports and External Dependencies**

Progress on several reports that depend on data from an external payment provider (PayClarity) was reviewed. The conversation centered on identifying blockers and determining whether managerial intervention was needed.

- **General Progress with PayClarity**: For most reports, such as those covering *electronic payment conversions* and *SLA monitoring*, alignment on timing is proceeding without major issues. The external partner is providing the necessary data, suggesting a functional working relationship for these items.
- **Specific Data Field Blocker**: One report is currently blocked because a required data field is not available from PayClarity's standard systems. A solution is being investigated on their end, and direct follow-ups are already in progress, making further escalation unnecessary at this time.
- **Offer of Managerial Escalation**: A standing offer of facilitation and escalation was made, following a discussion with higher management about project blockers. This offer is particularly relevant for more complex future requests that may require the external partner to undertake custom development work not aligned with their other clients' needs.
- **Timeline Expectations for Pending Items**: For the reports without specific blockers, completion is expected to be a matter of internal review and timing coordination. The goal is to finalize and account for these reports within the January period.

## **Project Outlook for February and Beyond**

The roadmap for February was briefly examined, with acknowledgment that some projects may face delays or require scope adjustment.

- **Arcadia Integration Timeline**: The Arcadia integration project is expected to spill over into March. However, a significant milestone, termed "phase 1.5," is still targeted for completion within February, indicating a phased delivery approach.
- **Credential Management Project**: Work on the credential management system is confirmed to be on track for progress in February.
- **Potential Scope Reduction**: The "interpreter layer" project is under consideration for being pushed back or scrapped altogether, as the need for it may no longer exist. This reflects adaptive planning based on evolving requirements.
- **Multi-Factor Authentication (MFA) Work**: The status of MFA-related tasks is uncertain for February. Meaningful progress is not expected unless some aspects can be absorbed into the ongoing Arcadia integration work, necessitating a separate internal discussion to determine the path forward.
- **Holistic Account Health Project**: This was mentioned as a substantial undertaking on the horizon, without detailed planning during this check-in.
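
A hedged sketch of the unplanned-versus-planned calculation mentioned in the January review, computed by issue count over an exported Jira list. The `unplanned` label is a hypothetical convention; the team's actual Jira fields were not specified.

```python
def unplanned_percentage(issues: list[dict]) -> float:
    """Share of issues carrying the (assumed) 'unplanned' label,
    by issue count rather than logged hours."""
    if not issues:
        return 0.0
    unplanned = sum(1 for i in issues if "unplanned" in i.get("labels", []))
    return 100.0 * unplanned / len(issues)

# Example: 12 unplanned out of 40 January issues -> 30.0
# print(unplanned_percentage([{"labels": ["unplanned"]}] * 12
#                            + [{"labels": []}] * 28))
```
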
# Credentials Sync Updates
## Credentials Management and Filter Fixes

The meeting focused on critical updates to the credentials management system. A primary issue with the "Needs Action" filter was resolved, and the fix is ready for deployment to the production environment.

- **Fix for the "Needs Action" Filter:** The logic for this filter was corrected to properly display all records that require attention. The filter is designed to flag any credential mismatch, such as missing credentials in one system or discrepancies in usernames and passwords, which is crucial for the team to identify and resolve issues. The corrected version has been tested and is currently deployed on the development server.
- **Priority for Production Deployment:** Due to the operational need to begin credential remediation work immediately, pushing this specific fix to production was prioritized above other tasks. All other recent changes related to credential presentation were confirmed to be working correctly in production.
- **Clarification of Filter Logic:** There was a discussion about whether multiple dropdown filters should combine with "AND" or "OR" logic. It was clarified that the current implementation uses "AND" logic, but for the team's workflow the correct functioning of the standalone "Needs Action" filter is the most important requirement.

## Archiving Functionality for Invalid Accounts

A new feature was developed to handle accounts that are no longer valid or active within the system.

- **New Archiving Capability:** A button was added to the comparison module, allowing users to manually archive specific account records. This helps clean up the active view by removing outdated entries.
- **Viewing Archived Records:** A corresponding dropdown filter was implemented so users can view archived records, providing a complete audit trail and preventing permanent data loss.

This new archiving feature has been deployed to the development environment for testing.

## Performance and Deployment Strategy

The conversation included important notes on system performance and a cautious approach to rolling out changes.

- **Acknowledgment of Performance Lag:** The credentials interface can be slow to load, likely due to the volume of records being processed. While acceptable for now, optimization was identified as a future consideration to improve the user experience.
- **Rigorous Testing Protocol:** A more cautious deployment strategy was agreed: any significant new feature or change must first be held in a staging environment and thoroughly tested, with feedback gathered from operators or other end users before any production release, to prevent disruption to daily work.

## UI and Data Format Corrections

Several user interface and data formatting bugs were addressed and fixed.

- **Calendar Date Field Display:** An issue where empty date fields in the calendar UI were not displaying correctly was resolved. The fix ensures the interface properly shows empty fields and consistently passes dates to the backend API in the correct format.
- **Pagination Listener Fix:** On the status page, a bug was fixed where page-number changes were not being picked up after user interaction, ensuring smooth navigation when paging through data.

Both of these fixes have been pushed to the production environment.
## Account Synchronization with Arcadia

Significant progress was made on syncing internal account data with the external Arcadia system, which is key to automating statement retrieval.

- **New Account Sync Feature:** A new synchronization function was created that, given a client ID, fetches all corresponding account information from Arcadia, including the Arcadia account ID for each internal account. This sync must be triggered manually.
- **Purpose of the Sync:** The primary goal is to establish a reliable mapping between internal account numbers and Arcadia's unique account IDs. This stored mapping allows more efficient API calls when downloading statements, eliminating the need for repeated lookups (a hedged sketch follows this summary).
- **Future Automation Potential:** The discussion explored automating this sync in the future. One proposed method is to refresh the account ID mapping every time a response is received from Arcadia's API, keeping the data continuously accurate. Also noted for future consideration was pulling metadata such as "next expected post date" and "latest statement" from Arcadia.

## SharePoint Integration for Statement Management

Work continued on integrating SharePoint to automate the storage and retrieval of client statements.

- **Current Development Focus:** The immediate task is to complete the programmatic SharePoint integration so statements downloaded from Arcadia can be uploaded to the correct SharePoint locations.
- **Planned Implementation Scope:** The initial plan is to download statements on a monthly basis, starting with February.
- **Client-Specific Organization:** A key requirement is that downloaded statements be split and organized by customer or client within SharePoint. The newly established Arcadia account ID sync is instrumental here, as it ensures each statement can be accurately associated with the correct internal client account.
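
A minimal sketch of the client-to-Arcadia account mapping described above. `fetch_arcadia_accounts` stands in for the real Arcadia API call, whose endpoint and response shape were not specified; assume it yields records with `account_number` and `arcadia_id` keys.

```python
def build_account_mapping(client_id: str, fetch_arcadia_accounts) -> dict:
    """Map internal account numbers to Arcadia account IDs for one client."""
    mapping = {}
    for acct in fetch_arcadia_accounts(client_id):
        mapping[acct["account_number"]] = acct["arcadia_id"]
    return mapping

# With the mapping stored, a statement download needs no extra lookup:
# arcadia_id = mapping[internal_account_number]
# download_statement(arcadia_id)    # hypothetical helper
```

Refreshing this mapping on every Arcadia API response, as proposed, would keep it current without a separately scheduled sync job.
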
# DSS Date Fixes
## Overview of DSS App Date Handling Issues

The meeting centered on two critical problems with date input and storage in the DSS application. The primary focus was on rectifying data integrity issues and improving the user experience by ensuring consistent, correct date field interactions.

## Core Date Storage Issue

The application is currently storing user-entered dates incorrectly in the database, which poses a significant risk to data integrity and report accuracy.

- **Incorrect date storage:** There is a mismatch between how dates are entered by users and how they are saved to the database. This discrepancy could lead to incorrect data in reports and analytics, undermining the reliability of the system (a common failure mode of this kind is sketched at the end of this summary).
- **Clarification of expected behavior:** A reference screenshot was provided that shows the correct, expected format and method for storing dates. The immediate technical priority is to audit the current storage logic and align it with this standard to prevent further data corruption.

## Inconsistent Date Input Methods

A key usability flaw was identified: the application fails to provide a date picker for all fields that are designated as date fields, forcing manual text entry.

- **Missing date pickers for date fields:** Several fields defined as date types in the application's schema do not trigger the date picker UI component when selected. Users must type dates manually, which is error-prone and leads to inconsistent formats.
- **Proposed solution for consistency:** The requirement is to detect when a user clicks into any cell corresponding to a date-type field and automatically present the standard date picker component. This would standardize input, reduce errors, and improve the overall user experience.

## Technical Implementation Priorities

The work was framed as two distinct but related tasks.

- **Investigation of storage logic:** The first task is a deep dive into the backend or data layer code to identify why dates are being transformed or saved incorrectly between the user interface and the database.
- **Enhancement of UI field detection:** The second task focuses on the frontend: updating the field-rendering logic to reliably identify all date-type fields and invoke the picker widget, eliminating manual entry.

## Next Steps

The immediate action item is to begin a thorough investigation of both issues, starting with the data storage discrepancy. Resolving these problems is considered urgent for maintaining data quality and application usability.
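
One common cause of entered-versus-stored date mismatches is round-tripping a calendar date through a timezone-aware timestamp; whether this is the actual DSS bug was not established in the meeting, so treat this as an illustrative hypothesis:

```python
from datetime import datetime, timezone, timedelta

entered = "2025-02-01"                      # calendar date the user picked

# Failure mode: persist the date as midnight UTC, then render it in a
# western timezone -- the displayed date slips back a day.
stored = datetime.fromisoformat(entered).replace(tzinfo=timezone.utc)
eastern = timezone(timedelta(hours=-5))
print(stored.astimezone(eastern).date())    # 2025-01-31, off by one

# Safer pattern for calendar dates: keep them as plain ISO date strings
# (or DATE columns) end to end, never as timestamps.
safe = datetime.fromisoformat(entered).date().isoformat()
print(safe)                                 # '2025-02-01'
```
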
# Credential Management Feedback
## **Summary**

The meeting focused on critical feedback and feature requests for improving a credentials management and test environment system. The discussion centered on current functional limitations, interface clarity, and deployment priorities to enable immediate operational use.

### **Issues with the Current "Needs Action" Filter**

The primary filtering mechanism intended to isolate accounts requiring attention is not functioning as expected, leading to an overwhelming and inefficient review process.

- **Inaccurate Results:** Applying the "needs action" filter still displays a large number of accounts, suggesting the filter is not correctly isolating items with genuine discrepancies. This undermines the tool's core purpose of streamlining credential reviews.
- **Conflicting Filter Logic:** Attempting to use multiple specific filters simultaneously (e.g., mismatching usernames, mismatching passwords) creates confusion. It is unclear whether these filters are designed to be used in conjunction or independently, hindering precise searches.

### **Ambiguity in Discrepancy Resolution**

The system flags accounts for resolution where no clear difference is visible to the user, reducing trust in the automated validation.

- **Unclear Discrepancy Indicators:** Specific accounts are marked for resolution even though the visible username and password fields differ only subtly or not at all (e.g., `energy123` vs. `energy123!`). The root cause of the flag, potentially a difference in the associated URL or another hidden field, is not surfaced to the user, making resolution impossible without guesswork (a sketch of surfacing the offending field follows this summary).

### **Suggested Interface and Workflow Improvements**

Proposals were made to reorganize the interface for better focus and future functionality.

- **Dedicated Validation Tab:** It was suggested to split credentials management into a separate "credentials validation" tab. This dedicated space would provide a running count of items needing action, giving teams an immediate high-level view of their workload.
- **Future Account Management Feature:** A longer-term need was identified for the ability to archive or soft-delete accounts. This would be crucial for handling accounts found to be invalid or closed, providing a way to clean the system without permanent deletion, especially for accounts that exist in only one environment.

### **Critical Deployment and Production Issues**

Urgent attention was demanded for system stability and making the new features available to end users.

- **Production Deployment Failure:** The credentials management page in the production environment is currently non-functional. Ensuring a stable production deployment is the highest priority.
- **Immediate Operational Need:** Even if not all proposed improvements are complete, the core validation functionality must be live in production so teams can begin reviewing client credentials the following day. The improvements already made and documented (noted as "the copy is already in") must be pushed to production without delay.
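
A minimal sketch of how the resolution screen could surface *which* field triggered a flag, per the feedback above. The compared field list and record shapes are assumptions, not the actual DSS schema.

```python
COMPARED_FIELDS = ("username", "password", "url", "account_number")

def explain_discrepancy(ds_record: dict, ubm_record: dict) -> list[str]:
    """List every compared field that differs between the two sources,
    so the UI can say *why* a credential was flagged instead of leaving
    the reviewer to guess."""
    return [
        f"{field}: {ds_record.get(field)!r} != {ubm_record.get(field)!r}"
        for field in COMPARED_FIELDS
        if ds_record.get(field) != ubm_record.get(field)
    ]

# Hypothetical case like the one in the meeting: visible fields match,
# but a hidden field (the URL) differs, which is what raised the flag.
ds  = {"username": "energy123", "password": "pw", "url": "https://a.example"}
ubm = {"username": "energy123", "password": "pw", "url": "https://b.example"}
print(explain_discrepancy(ds, ubm))
# ["url: 'https://a.example' != 'https://b.example'"]
```
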
# Automated Bill Pulls
## Current Task Status

The immediate technical task of adding a credentials status column to an existing database table is nearly complete, requiring only a few more minutes of work. This sets the stage for the next phase of development.

## Automated Bill Retrieval and Storage System

A new automated process needs to be built to pull the latest bills from the Arcadia system on a daily basis. The core objective is to eliminate the manual step of logging into individual vendor portals.

- **Establishing a Central Repository:** Files will no longer be stored locally but in a shared location. Options discussed include an FTP folder or a newly created SharePoint site, with the latter readily available.
- **Storage and Organization Logic:** The system must store files on a monthly basis. A highly desirable feature is organizing the files by customer, which would significantly streamline subsequent manual processing for the operations team.
- **Execution Scope:** This automated retrieval and storage process is intended to persist until a fully automated ingestion pipeline is built, acting as a critical interim efficiency improvement.

## File Naming and Organization Strategy

A consistent naming convention for the retrieved bill files is essential for clarity and manual handling. The convention will incorporate key data points from Arcadia (a hedged sketch appears at the end of this summary).

- **Naming Convention Components:** File names should include the vendor name, account number, and a relevant date field.
- **Date Field Hierarchy:** A clear priority for the date field was established: the **statement date** is the primary choice; if unavailable, the **due date** should be used; as a final fallback, the **service period end date** can be used.
- **Folder Structure:** A SharePoint organization scheme was suggested: monthly folders (e.g., "February"), potentially with sub-folders for each client, for optimal order.

## DSS Application Enhancements

Concurrent with the automated retrieval system, modifications are required in the DSS application's upload interface to accommodate the new workflow.

- **Expanding Queue Options:** The upload function's dropdown menu needs additional queue selections. The incoming bills are all destined for the prepay queue, and adding this option explicitly will prevent misrouting to the priority or on-demand queues.
- **Task Management:** Behind the scenes, logic must ensure that tasks generated from these uploads are appropriately snoozed according to their new queue assignment.

## Client-Specific Data Handling

A significant part of the discussion centered on whether and how to associate retrieved bills with specific clients automatically, which would add substantial value.

- **Automated Client Identification:** The ideal workflow would automatically group bills by client without a manual selection interface. Feasibility hinges on whether the new credentials status table (or another source) contains the data needed to map an Arcadia account to a specific client.
- **Value of the Feature:** Per-client organization would make the file repository intuitive for the operations team to navigate and would improve tracking and accountability for each client's documents.

## Queue Management Considerations

There was a clarifying discussion of how the different queue types within the DSS application function, which informs the needed enhancements.
- **Existing Queue Mechanics:** Currently, selecting "batch" uses an OpenAI batch queue with a 24-hour turnaround, while "on demand" uses an API for instant response. The prepay, postpaid, and backlog queues all internally use the OpenAI batch process.
- **Handling Live Bill Volume:** A key consideration for future phases is managing the potentially high volume of uploaded "live" bills. The system must leverage the existing queuing mechanisms appropriately to ensure stable processing, though specific implementation details are deferred until later in the project.

## Next Steps and Collaboration

The plan forward involves parallel work streams and clear communication to maintain momentum.

- **Task Documentation and Assignment:** The agreed requirements and naming conventions will be formalized in an updated epic in the project tracker, which will then be shared to guide development.
- **Immediate Action:** Development of the automated bill retrieval service is the immediate next priority.
- **Resource Coordination:** Another team member (Tina) has bandwidth available. Progress updates or requests for assistance can be posted in the development team channel at the end of the day for efficient collaboration.
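
A hedged sketch of the agreed naming convention and folder layout, including the statement-date, then due-date, then service-period-end fallback; the key names on the Arcadia statement payload are assumptions.

```python
from datetime import date
from pathlib import PurePosixPath

def _bill_date(statement: dict) -> date:
    # Agreed fallback order: statement date -> due date -> service period end.
    return (statement.get("statement_date")
            or statement.get("due_date")
            or statement.get("service_period_end"))

def bill_filename(vendor: str, account: str, statement: dict) -> str:
    """'<vendor>_<account>_<YYYY-MM-DD>.pdf' per the naming convention."""
    return f"{vendor}_{account}_{_bill_date(statement).isoformat()}.pdf"

def bill_path(client: str, vendor: str, account: str, statement: dict) -> PurePosixPath:
    # Monthly folder (e.g. "2025-02") with a per-client sub-folder.
    d = _bill_date(statement)
    return PurePosixPath(f"{d:%Y-%m}") / client / bill_filename(vendor, account, statement)

# print(bill_path("AcmeCorp", "ConEd", "123456", {"due_date": date(2025, 2, 14)}))
# -> 2025-02/AcmeCorp/ConEd_123456_2025-02-14.pdf
```
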
# Arcadia Rollout Plan
## Summary

The meeting centered on the urgent implementation of the Arcadia platform to solve critical bottlenecks in the invoice download and processing pipeline. The primary goal is to drastically reduce the time between an invoice being published online and its download, which currently averages 8 days and severely eats into standard payment cycles. Arcadia is seen as the necessary external solution for automating this process. The discussion outlined a phased implementation plan, immediate interim manual processes, and several technical hurdles that must be resolved for a fully automated deployment.

### The Core Problem and Arcadia's Role

The central issue is an unsustainable 8-day average delay in downloading invoices after publication, which consumes half of a typical 15-20 day payment window. This delay is the primary driver for implementing Arcadia, an external tool designed for automated, timely invoice retrieval. The initiative is critical for all bill-pay customers, as timely download is the first and most crucial step in the end-to-end payment process.

### Phase 1: Credential Cleanup and Initial Setup

The immediate first step is to ensure all customer credentials for utility portals are accurate and up to date in both the internal system (UBM) and Arcadia. A cooperative student (co-op) has been assigned to clean the credential list, aiming to close a roughly 10% gap in accuracy. The cleaned credentials will enable Arcadia to begin downloading invoices. The initial focus will be a subset of customers for testing before a broader rollout.

### Phased Implementation Timeline

A multi-phase rollout was proposed. The interim manual process begins immediately. The target is to have **all bill-pay customers onboarded to Arcadia for web downloads by mid-February**, with the remaining non-bill-pay customers added by the end of February. This timeline is contingent on successful credential cleanup and resolution of technical dependencies.

### Interim Manual Process and Folder Structure

Until full automation is achieved, a manual interim process will be used. Invoices downloaded from Arcadia will be placed in a shared directory with a specific folder structure:

- **Customer-specific folders:** Each client (e.g., "Constellation") will have a dedicated main folder.
- **Date-based subfolders:** Within each customer folder, subfolders will be created based on the invoice date (e.g., "2025-01"). A cascading logic for determining this date was agreed: use the statement date first, then the due date, and finally the service period end date.
- **Manual review and upload:** A designated person from the data services team will review the invoices in these folders, confirm the client, vendor, and account number, and then manually drag-and-drop the PDFs into the DSS (Data Services System) for processing. A "loaded" subfolder will track processed invoices.

### Key Technical Hurdles and Automation Dependencies

For long-term, scalable success, two major technical milestones must be achieved to move beyond the manual bottleneck:

1. **Building a managed pipeline and directory:** Creating an automated system to receive and track files from Arcadia, providing full visibility and control over the flow of invoice data.
2. **Injecting metadata into the process:** Developing a service to log critical metadata (client ID, vendor ID, account number) at the time of the API call to Arcadia.
   This will allow invoices to be automatically matched and identified on download, eliminating manual review. Options include using Arcadia's custom fields or unique IDs (a hedged sketch of this request-time metadata logging follows below).

### Reporting and Success Metrics

Establishing reporting capabilities from Arcadia data is essential for monitoring the initiative's success. Key metrics identified include:

- The total number of invoices requested for download versus the number successfully retrieved.
- Tracking which January (or current-month) invoices have been pulled and identifying what is missing.
- Overall coverage: how much of the total invoice volume (approximately 20,000 invoices for bill-pay customers) is handled by Arcadia versus requiring manual intervention.
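
A minimal sketch of the request-time metadata logging proposed in hurdle 2. The `arcadia_client` object, its `request_download` method, and the `external_ref` parameter are hypothetical stand-ins, since Arcadia's actual API surface was not specified in the meeting.

```python
import json, time, uuid

def request_invoice_download(arcadia_client, cred, log_path="arcadia_requests.jsonl"):
    """Log client/vendor/account metadata at request time so the file can
    be matched automatically when it lands, without manual review."""
    correlation_id = str(uuid.uuid4())       # travels with the request
    record = {
        "correlation_id": correlation_id,
        "client_id": cred["client_id"],
        "vendor_id": cred["vendor_id"],
        "account_number": cred["account_number"],
        "requested_at": time.time(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")   # append-only request log
    arcadia_client.request_download(cred, external_ref=correlation_id)
    return correlation_id

# On download, the correlation_id that comes back with the file is looked
# up in the log to recover client, vendor, and account automatically.
```
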
# Constellation Bill Retrieval
## Customer

The customer is Constellation Energy, which operates in a dual capacity: as a **vendor** providing energy supply and as a **client** using the platform for its own energy management and bill processing. Constellation has a complex account structure involving thousands of service accounts and requires integration for historical and ongoing bill retrieval.

## Success

The most significant success demonstrated involves the effective use of the Arcadia integration for automated bill retrieval. For energy vendors *other than Constellation*, the process is working smoothly: bills are pulled in bulk through Arcadia's API, automatically organized into customer-specific folders, and delivered via a shared drive or FTP. This automation has replaced a highly manual, bill-by-bill download process, proving the value of the platform's data aggregation capabilities for streamlining operations.

## Challenge

The primary and most pressing challenge is the **inability to automatically download a backlog of approximately 3,300 historical invoices directly from Constellation Energy's own portal**. Attempts to use automated scripts or crawlers are being blocked by Constellation's network security. Furthermore, there is confusion and no established process for account credential management: the team lacks the centralized credentials (such as a master login for the Constellation portal) needed to authorize tools like Arcadia to access these bills on behalf of the Constellation client entity. This is compounded by data alignment issues, as some accounts on the list may be "rate ready" and have no actual invoices available, leading to potential mismatches and wasted effort.

## Goals

The customer's immediate and longer-term goals center on resolving the invoice backlog and establishing reliable, scalable processes:

- Obtain the backlog of roughly 3,300 Constellation bills within the next 24-48 hours, evaluating options such as an Arcadia bulk pull, a dedicated file container, or manual download as a last resort.
- Establish a clear, documented, and accountable workflow for adding new Constellation customer accounts to the central credential portal (CEP) to prevent this issue for future bills.
- Implement a sustainable, automated method for ongoing Constellation bill retrieval, preferably through an integration like Arcadia that can deliver files directly to a designated FTP location.
- Clarify the scope of the backlog by reconciling the list of requested bills with actually enrolled accounts, to avoid processing non-existent or out-of-scope invoices.
SharePoint and Appsmith
## **Summary**

The meeting focused on several key operational and technical initiatives to streamline workflows and enhance data accessibility. Topics included coordinating support for a new team member, implementing a temporary storage solution for billing documents, managing firewall access for a data visualization tool, and improving error handling within an existing service. The discussions were solution-oriented, with clear next steps identified for each item.

### **Coordination for Credentials Sync Support**

A new team member, Kanish, will require assistance with credentials synchronization processes. He will initiate contact directly, and all necessary support will be coordinated to address his questions, including scheduling calls across time zones.

- **Direct support channel:** Kanish has been instructed to contact the relevant person directly via Slack for any questions regarding credentials sync, ensuring a straightforward line of communication.
- **Time zone consideration:** With Kanish located in India, there is an approximate 3-4 hour time difference, which will be factored in when scheduling any potential calls for deeper discussions.

### **Implementation of a Temporary Storage Solution**

A decision was made to utilize the existing Microsoft SharePoint infrastructure as a temporary storage repository for client bills, avoiding the need to build a custom solution.

- **Leveraging existing licenses:** The company's Microsoft Business Premium licenses provide ample SharePoint storage (approx. 10GB per user), which is already available to all operators.
- **Initial setup and access:** A dedicated SharePoint site named "Arcadia Files" has already been created. The site owners will verify operator access and are responsible for organizing the folder structure, which can be arranged by client for easier management.
- **Ownership and permissions:** The necessary individuals are designated as owners of the SharePoint space, granting them full control to create folders and manage content as needed for the onboarding process.

### **Firewall Configuration for Appsmith Integration**

A request was made to whitelist the IP addresses for Appsmith, a data visualization tool, to allow it to pull data from internal databases.

- **Security and scope clarification:** The request requires specific details on which parts of the infrastructure (e.g., specific databases or services) need to communicate with Appsmith to ensure proper and secure firewall rules.
- **Purpose of the tool:** Appsmith is intended to create editable, dynamic dashboards and reports, primarily for UBM data but with expanding use for DS data as well.
- **Action item:** The relevant team will review the specific IP details posted in the dedicated communication channel and proceed with the whitelisting process on the DS side, mirroring what was previously done for UBM.

### **Enhancements to Output Service Validation**

Work is planned to improve the output service by implementing a mechanism to handle validation failures appropriately, ensuring failed bills are routed for manual review.

- **Current status:** A previous issue with a stored procedure in the output service has been resolved, and the service is currently functioning for single-bill processing.
- **Proposed improvement:** The focus is on adding logic so that when a bill fails validation during output processing, it is automatically sent to a "data acquisition audit" workflow where operators can intervene (see the sketch at the end of this summary).
- **User experience consideration:** To aid operators, any bill kicked back to audit will include a note within the system explicitly stating the reason for the failure, providing immediate context for corrective action.
- **Next steps:** The implementation will involve brainstorming the specific integration points within the existing DS state workflow and beginning development on this new error-handling path.
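A minimal sketch of the proposed error-handling path, assuming illustrative state names and a simple in-memory bill record; the real DS state workflow and persistence layer are not described in the notes:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative state names; the actual DS workflow states are assumptions.
AUDIT_STATE = "DATA_ACQUISITION_AUDIT"
READY_STATE = "READY_TO_SEND"

@dataclass
class Bill:
    bill_id: int
    state: str = "OUTPUT_PROCESSING"
    notes: List[str] = field(default_factory=list)

def route_after_validation(bill: Bill, errors: List[str]) -> Bill:
    """On validation failure, kick the bill back to the audit queue and
    attach an explicit note so operators see why it failed."""
    if errors:
        bill.state = AUDIT_STATE
        bill.notes.append("Output validation failed: " + "; ".join(errors))
    else:
        bill.state = READY_STATE
    return bill
```

The note appended here corresponds to the "user experience consideration" above: the failure reason travels with the bill so the operator has immediate context.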
DSS Update Flow
## Summary

The meeting focused on troubleshooting and clarifying the expected behavior of the DSS auto queue system, particularly regarding how user updates to pending items should be processed.

### DSS Auto Queue Process and Update Behavior

The core discussion revolved around the automated workflow for items in the DSS auto queue. The key point clarified was that when a user submits an update to an item within this queue, the system should automatically progress that item to the next stage in its workflow. This confirmation resolved a point of prior confusion, indicating that manual intervention to move the item forward should not be necessary after an update is saved.

- **Automated progression upon user update**: The system is designed to handle updates seamlessly, moving the item to the next stage automatically once the user submits their changes (see the sketch at the end of this summary). This addresses a potential gap in understanding where a team might have been waiting for a separate manual step.

### Example of a System Inconsistency

An immediate practical example was raised to test the confirmed understanding, pointing to a potential data or display issue within the system itself. A specific field in a record was noted to show an unexpected value ("1300"), which contradicted other available information.

- **Addressing data discrepancies**: While the process for handling updates was clarified, the conversation highlighted an instance where the data presented to the user might be incorrect or inconsistent, suggesting a separate issue that could affect user trust and system reliability.
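A minimal sketch of the clarified behavior, assuming a hypothetical ordered stage list (the real DSS auto queue stages are not named in the notes):

```python
# Hypothetical stage order; the actual DSS auto queue stages are assumptions.
STAGES = ["PENDING_REVIEW", "IN_PROGRESS", "VALIDATED", "COMPLETE"]

def submit_update(item: dict, changes: dict) -> dict:
    """Saving a user's update should advance the queue item automatically;
    no separate manual 'move forward' step is expected."""
    item.update(changes)
    current = STAGES.index(item["stage"])
    if current < len(STAGES) - 1:
        item["stage"] = STAGES[current + 1]
    return item

# Example: an update to a pending item moves it straight to the next stage.
item = {"id": 42, "stage": "PENDING_REVIEW"}
submit_update(item, {"amount": 1300})
assert item["stage"] == "IN_PROGRESS"
```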
Invoice Ops Sync
## Customer

The customer is an energy supplier or utility company, likely operating under the "Constellation" brand, which utilizes the platform for invoice processing and management. They appear to be a sophisticated user, engaged in a detailed operational workflow that involves pulling and reconciling utility bills on behalf of their own customers. Their background involves managing complex data integrations and ensuring accurate financial transactions through the system, indicating a mature and critical use of the service.

## Success

A notable success highlighted is the customer's proactive and diligent oversight of their account security and user management. Specifically, the customer (referenced as BPG) identified unfamiliar "Cognizant" usernames with external email addresses on their account and actively questioned who these users were. This level of scrutiny is described as exceptional, demonstrating that the customer is thoroughly engaged with the platform's user audit features and is vigilant about access control. This incident validates the value of the platform's transparency but also underscores that this particular customer operates with a high standard of internal governance.

## Challenge

The primary challenge revolves around significant data integrity and system integration issues. There is a persistent problem with customer accounts and invoices not being correctly set up or reflected in the system. For instance, some accounts were not properly registered in the core database despite being in the upstream system, leading to invoices not processing. Furthermore, there is considerable confusion and inconsistency around critical data fields, such as "total charges" versus "total amount due," which impacts accurate financial reporting and reconciliation. An additional, recurring pain point is the visibility of internal support or implementation team members within the customer's user list in the platform, which causes confusion and concern for the customer when they see unknown individuals with access.

## Goals

- Achieving seamless and error-free invoice processing and data acquisition from utilities.
- Establishing clear and accurate data mapping, particularly for financial fields like total payment amounts, to ensure reliable reporting.
- Improving the user management experience within the platform, ideally by masking or clearly designating internal support personnel to prevent customer confusion.
- Gaining better visibility into the processing lifecycle beyond SLA metrics, such as tracking against invoice due dates, to manage internal workflow more effectively.
- Cleaning up legacy data and tickets (e.g., items in "OPS review") to maintain a clear and actionable system record.
Duration Confirmation
## Summary

The transcript provided for analysis is extremely short and contains no substantive content or any actual discussion of a topic. It consists almost entirely of the repeated phrase "Excellent, 45 minutes" ("ممتاز 45 دقيقة"), with no context or detail explaining the remark or identifying the idea or project under discussion. As a result, no significant discussion points can be extracted, and there are no meaningful topics to break into subsections. Based on the text provided, the takeaway is that no practical or administrative topic was discussed and no information of value was exchanged.
Partnership and Work Organization
## **Prospective Client**

The client is a founder with experience in design and AI-enhanced creative services. He has a technical background that enables him to use advanced tools such as Gemini for image and video generation, and he relies on platforms such as Figma for design work. He takes a clearly entrepreneurial approach, insisting on finding a business partner (co-founder) who can share responsibility and build alongside him, rather than a mere employee. He currently works largely alone, personally overseeing communication with existing clients and managing the workflow.

## **Company**

The business centers on specialized design services, with a focus on custom templates and AI models to improve quality and efficiency. Monthly recurring revenue (MRR) is roughly USD 9,000, with four current clients, some on annual plans. The company uses a range of technical tools, such as cloud computing and direct messaging platforms (e.g., WhatsApp), to manage projects and communication. It is at a growth stage that requires developing internal processes to automate tasks and increase capacity to take on more clients.

## **Priorities**

- **Achieving scalability:** The top priority, since the founder's current capacity to take on new clients is limited by his full involvement in managing existing ones. There is an urgent need to automate internal processes and build systems that allow clients to be added without squeezing time and quality.
- **Structuring effective client communication:** Communication currently relies heavily on direct, informal channels such as WhatsApp, which may not be sustainable as the client base grows. The focus is on finding more organized tools or procedures for reviewing designs and exchanging feedback with clients in a focused, efficient way.
- **Balancing remote work with personal connection:** The client believes in the importance of direct interaction and building personal relationships, even within a digital working model. He discusses adopting a hybrid model that includes periodic intensive team gatherings to foster team spirit and shared innovation, while preserving the flexibility to work from anywhere.
- **Securing funding and sustainable growth:** There is an acknowledged need for capital to enable growth, whether to fund technical development or to provide stable salaries when bringing on partners or employees. A fair partnership model is being considered, one that accounts for the level of commitment and time a potential partner can dedicate.
DS AI Meeting Prep
Navigator Alignment
## Background

The candidate has a strong product management background, often acting as the "glue" between engineering, product input, and operations to drive product strategy. Their career path includes consulting, followed by a role at the Finmark OIC startup, and most recently, a significant tenure at Build.com. At Build.com, they played a key role in leading the product vision and foundational work, particularly during and after acquisitions.

## Work Experience

The candidate's experience is characterized by tackling complex integration and platform challenges. Their most relevant tactical experience comes from their time at Build.com following the Finmark acquisition. They led the effort to create a unified platform by merging features from three distinct systems: the existing Build platform, the Finmark product, and another acquired company, Devi. This involved defining the vision for "Build 2.0," which ultimately became the foundation for the current Build.com platform. Their approach to such large-scale, cross-functional collaboration is methodical and highlights several key insights:

- **Ownership and Alignment:** Success hinges on assigning clear product ownership for each application component and ensuring those owners' direct managers are fully aligned on goals and timelines.
- **Managing Dependencies:** A primary challenge is securing the right resources, especially from teams outside one's direct control, due to inevitable external dependencies.
- **Technical Vision:** It is crucial to have the right technical leadership involved from the start to make decisions that support a long-term, sustainable vision rather than just a quick fix.

Currently, at Constellation, the candidate is applying this experience to untangle systemic issues. They identify two core problems: a "house of cards" architecture where solutions are built in isolation and become brittle, and a cultural "mindset shift" needed to challenge long-held operational assumptions that may be contributing to technical debt. Their work focuses on deliberate refactoring of broken, siloed components to create a modular, scalable foundation, starting with critical data services.

## Candidate Expectations

### Working Style

The candidate expects and thrives in a highly collaborative environment. They emphasize the importance of breaking down silos, not just technically but interpersonally. They believe in empowering team members to communicate directly with the right people across engineering and operations without unnecessary gatekeeping or bureaucratic hurdles. Their ideal work mode involves being the connective tissue between strategy and execution.

### Career Goals

Professionally, they are focused on moving the organization from a short-term, fire-fighting mentality to a strategic, product-first, and platform-oriented approach. They aim to establish scalable systems and processes that will support long-term growth, specifically by transforming the platform to reduce manual operational overhead and custom one-off solutions.

### Specific Needs

To be effective, the candidate needs direct access to knowledge and the authority to question existing processes. They note a current struggle where they are not always equipped with the necessary information, and when they consult subject matter experts, underlying assumptions are often not challenged. They seek an environment where "everything is on the table" for reevaluation to solve root causes, not just symptoms.

## Questions/Concerns

The candidate's questions reflect a deep engagement with the company's strategic challenges. They sought validation on their diagnosis of the core issues, asking if the panel agreed on the major problem areas: chronic data acquisition timeliness, persistent data inaccuracies, and poor integration between key applications (Data Services, UBM, Carbon Accounting, Glimpse). Furthermore, they implicitly raised concerns about the company's ability to transition from a "services-first" and custom-reporting culture to a true product-led, scalable SaaS model, given the existing commitments and business structure.
Plan for Arcadia
## Summary

The meeting focused on resolving several ongoing technical challenges and planning a new workflow to streamline data operations. Discussions centered on system integrations, infrastructure access, and developing a temporary solution for managing critical documents.

### Power BI Account and Pipeline Deployment

A solution has been developed for a previously stalled Power BI integration. While Microsoft support was unhelpful, an internal resource successfully rewrote the necessary components. The revised code is now ready but not yet in production. The next step involves running a specific deployment pipeline to activate the service in the live environment, which is described as a safe, intermediate-step process designed to avoid breaking existing systems. The status-check and restart pipelines must be executed in sequence to ensure a correct deployment.

### FTP Server Access and File Retrieval

There was clarification on how to access and retrieve files from an internal FTP server. Access is tied to the corporate domain VPN, and users should already have the necessary credentials. The discussion covered practical methods for connection, recommending either using a Remote Desktop Protocol (RDP) session into a virtual machine or employing an FTP client like FileZilla. A specific issue was raised about potential security software on Windows machines blocking file execution, with the suggested workaround being to copy files directly from the VM via RDP rather than trying to install or run them locally.

### Proposed Arcadia PDF Management Solution

A significant portion of the meeting was dedicated to planning a new, temporary workflow to handle utility bills from Arcadia, aiming to drastically reduce manual work for the Data Operations (DS Ops) team.

**The Proposed Workflow**

The plan involves automatically downloading PDF invoices from the Arcadia provider API as soon as they are available. These files, named with the relevant customer account number, would then be placed in a shared folder. DS Ops personnel would access this folder, identify the correct PDFs, and manually upload them into their processing system (Web Download Helper or DSS), bypassing the need to log into individual utility portals.

**Critical Infrastructure and Access Considerations**

The primary unresolved question is identifying the appropriate shared storage location that all relevant team members can access securely. Key constraints and considerations were identified:

- **Access Requirements:** The solution must be accessible to all DS Ops team members, including the Mexico-based team, who may or may not have VPN access to the core infrastructure.
- **Security Concerns:** Providing full VPN access to the production infrastructure to all operators was flagged as a potential security risk and should be avoided.
- **Platform Options:** Several platforms were discussed as potential hosts for the shared folder, including a SharePoint folder (likely available with existing business email accounts), an Azure storage blob with a SharePoint front-end, or a third-party service. The core requirement is a simple, shared space accessible with existing company credentials, without requiring additional VPN configurations.
- **Next Steps:** The action is to research and define the simplest, most secure shared storage solution that meets the access requirements, with findings to be discussed in a follow-up meeting.

### Pending Tasks and Follow-ups

A separate issue regarding report analysis was noted as pending, awaiting a whitelist approval that is expected to be resolved on Monday. The overall direction for the Arcadia project will be solidified with more detailed requirements and a drafted plan in the immediate future.
Arcadia Integration Plan
## Architectural Integration and Scalability

The meeting centered on the urgent need to establish a cohesive architectural framework for integrating Arcadia across the organization's applications, such as Navigator, DSS, and UBM. The current piecemeal approach, relying on manual processes to connect with Arcadia and download customer invoices, is unsustainable and creates conflicts, especially with shared clients. A long-term, scalable solution is required to manage customer IDs, send uniform notifications, and maintain a consistent data structure.

- **Establishing a Unified Data Architecture:** The core proposal is to develop a shared Navigator database or user base. This system would pull data from Arcadia once and distribute it intelligently across all relevant applications, ensuring all teams operate from a single source of truth and can communicate effectively to avoid future conflicts and duplication of effort.

## Immediate Operational Goals with Arcadia

The primary immediate objective is to leverage Arcadia to solve the most pressing operational bottleneck: the time it takes to download and process customer invoices, which currently averages around seven days. The strategy is to implement a phased approach, starting with a manual but faster process, while building the foundations for full automation in parallel.

- **Phase 1: Manual PDF Download and Distribution:** The first milestone is to use Arcadia simply as a fast download source. Invoices will be pulled and placed into pre-defined, customer-specific folders on a shared directory. A manual cleanup process for this folder will be established (e.g., data purged every two weeks) to manage the influx. This step alone is expected to dramatically reduce the initial 7-day delay.
- **Phase 2: Laying the Groundwork for Automation:** Concurrently, the team will work on the technical prerequisites for automation. This includes implementing clean credential logging to track Arcadia API calls and errors, and establishing a reliable webhook system from Arcadia to receive real-time notifications when new bills are available, which is critical for eliminating guesswork and assumptions in the billing cycle.
- **Parallel Operational Streams:** It was emphasized that work on non-Arcadia-related processes (like template builds and ICDB cleanups) must continue. The goal is to fold Arcadia into the organization's optimized workflow rather than creating a separate, permanent "Arcadia work stream."

## Credential Management and Data Mapping

A significant immediate blocker is the inconsistency in customer credentials stored across different systems (DSS and UBM). Cleaning and unifying these credentials is a prerequisite for successful and accurate automated data pulls from Arcadia.

- **Credential Cleanup Process:** A dedicated operational resource is needed to review and correct credential mismatches. A new tool has been developed to highlight discrepancies, such as usernames/passwords that differ between systems or accounts that exist in one system but not the other (see the sketch at the end of this summary). For a sample customer, mismatches numbered in the dozens, indicating a manageable but critical cleanup task.
- **Data and Vendor Mapping:** Beyond credentials, there is a need to resolve vendor name mismatches between internal records and Arcadia's listings (e.g., "First Energy" might be listed under a different name in Arcadia). A separate list of approximately 30 such vendors requires review and mapping to ensure complete coverage.

## Team Structure and Resource Requirements

To execute the plan, specific roles and resources were identified as necessary for both immediate tasks and long-term architectural success.

- **Immediate Operational Support:** Two key roles are needed urgently:
  - A credentials specialist to systematically clean up the credential mismatches using the new tool.
  - An operations team (potentially leveraging an existing overseas team) to manage the manual process of downloading invoices from the new Arcadia-sourced folders and handling them through the existing workflow.
- **Long-term Architectural Support:** The team identified a gap in solution architecture expertise. A dedicated solution architect is required to design the sustainable, high-level data flow and integration patterns that will allow Arcadia data to be used seamlessly across the entire Navigator platform (UBM, Carbon Accounting, etc.), ensuring the solution is built correctly for future scale.

## Preparation for Leadership Presentation

The conversation shifted to preparing for an upcoming presentation to key business leaders, including Jim, Dan, and George. The goal is to showcase the platform's capabilities and its strategic role in feeding multiple business units.

- **Presentation Structure and Key Messages:** The presentation will start by positioning the platform as the central "engine" that feeds critical services like Corporate Sustainability, Navigator Rebates, and retail commodity offerings. It must highlight collaboration with other teams and demonstrate how the platform adds value across the organization.
- **Content Guidance and Cautions:** Specific advice was given to tailor the message for the audience: emphasize the retail/SaaS application first, as it aligns directly with sales strategy, and be prepared to discuss applicability to gas customers. A caution was raised to avoid mentioning a specific reference customer (Core Weave) due to its recent public financial difficulties.
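A minimal sketch of the kind of discrepancy check the new credential tool performs, assuming simple per-account username/password records; the tool's actual schema is not described in the notes:

```python
from typing import Dict, List, NamedTuple

class Credential(NamedTuple):
    username: str
    password: str

def credential_discrepancies(dss: Dict[str, Credential],
                             ubm: Dict[str, Credential]) -> List[str]:
    """Flag accounts that exist in only one system, or whose
    username/password differ between DSS and UBM."""
    issues = []
    for account in sorted(set(dss) | set(ubm)):
        if account not in dss:
            issues.append(f"{account}: missing from DSS")
        elif account not in ubm:
            issues.append(f"{account}: missing from UBM")
        elif dss[account] != ubm[account]:
            issues.append(f"{account}: username/password mismatch")
    return issues
```

The output of a pass like this is what the credentials specialist would work through account by account.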
DSS Daily Status
## Status Updates and Current Work Focus

A round of status updates was given, primarily focused on the ongoing development and refinement of internal tools and reports. The key areas of work involve the credential management system, the data synchronization between UBM and DS tables, and resolving discrepancies in the late-arriving bills report.

## Credential Management System Development

Significant progress was made on the credential management interface, which has been deployed to the test and development servers. The updates enable comprehensive handling of credential data through a dedicated management page.

- **Enhanced UI and Functionality**: The logic and user interface for creating, editing, and deleting credentials have been updated. The system now displays columns from a new consolidated table, including mandatory fields like mark, source, and timestamps.
- **Discrepancy Resolution Workflow**: Credentials can now be resolved directly from the validation page, after which they appear on the management page for further edit or delete operations.
- **Dynamic Form Fields**: The "create credential" modal on the management page now includes dropdowns for vendor, client, and account, allowing for the creation of new credentials for specific combinations.

**Data Synchronization Strategy**

A shift in the data-fetching approach was discussed to optimize performance. Instead of dynamically pulling from both source systems (UBM and DS) in real time, a new method is being implemented where all records are fetched and stored in a single table within DS. A sync mechanism will update this consolidated table whenever a new credential is added or an existing one is modified in the source systems. A crucial future requirement noted was the need for **bidirectional sync**: any updates made in the new management system must also be propagated back to the original UBM and DS tables, as those are still actively used by other teams.

## Review and Next Steps for Credential Management

A live review of the test environment was conducted to align on the current state of the feature. It was confirmed that the system correctly displays discrepancies (e.g., records existing in only one system or having different passwords) and allows users to select a source of truth to resolve them. Feedback was noted regarding making the URL a required field and the description optional. A detailed list of items, including the bidirectional sync requirement, will be formalized and shared to guide the next phase of development.

## Deployment and Production Updates

Several features have been successfully promoted to the production environment following verification in test. The **bill upload data** feature and the **editable template** functionality within the activity history are now live. Furthermore, work has commenced on a separate project cycle related to FDG Connect, which is distinct from the main DSS application.

## Investigation into Late-Arriving Bills Report

A critical issue was raised regarding a mismatch between the "Late Arriving Bills" report and the master accounts list. The report appears to show fewer late bills than expected, prompting a need for investigation.

- **Identifying the Discrepancy**: The current report only includes accounts that have already received at least one bill, while the master list contains all accounts. The logic for identifying a "late" bill is based on whether the current date has passed the projected invoice date for that account (see the sketch at the end of this summary).
- **Scope and Logic Analysis**: A key finding was that the late bills report only considers two bill source types (Web and Mail), whereas the master account list tracks four types. It was emphasized that all bill sources must be tracked for accountability, as the team is responsible for timely payment regardless of how the bill is expected to arrive.
- **Plan for Resolution**: The immediate action is to add an "All" filter to the late bills report for broader visibility. The core task is to investigate why applying the same "late bill" logic to the full master accounts list yields a different count than the current specialized report. This may involve creating a new filtered view of all accounts to compare data directly. The team will also examine the specific conditions other teams are using to manually generate their own late reports to ensure alignment.
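A minimal sketch of the late-bill logic applied to the full master list, assuming illustrative field and source-type names (the notes name only Web and Mail; the other two source types below are placeholders):

```python
from datetime import date
from typing import List, Optional

ALL_SOURCES = {"Web", "Mail", "EDI", "Email"}  # master list tracks four types; last two are guesses
REPORT_SOURCES = {"Web", "Mail"}               # current report covers only these two

def is_late(projected_invoice_date: Optional[date], today: date) -> bool:
    """An account is 'late' once today has passed its projected invoice date."""
    return projected_invoice_date is not None and today > projected_invoice_date

def late_accounts(accounts: List[dict], today: date,
                  sources: set = ALL_SOURCES) -> List[dict]:
    """Apply the same late-bill logic to the full master list; comparing this
    against the report's Web/Mail-only output should expose the count gap."""
    return [a for a in accounts
            if a["bill_source"] in sources
            and not a["bill_received"]
            and is_late(a["projected_invoice_date"], today)]
```

Running this once with `REPORT_SOURCES` and once with `ALL_SOURCES` against the master list is one way to quantify how much of the mismatch the source-type filter explains.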
Arcadia Integration Plan
## **Summary**

The meeting focused on planning the technical integration with Arcadia, a utility data platform, to automate the retrieval of customer invoices (PDFs) and manage the associated credentials. The discussion centered on understanding current processes, defining new automated services, and solving key data mapping challenges between multiple internal systems (DSS and UBM) and Arcadia's API.

### **Project Overview and Goals**

The primary objective is to automate two core processes currently handled manually: sending customer credentials to Arcadia and pulling historical invoice PDFs from their platform. This automation is crucial for scaling operations and ensuring data consistency across systems.

- **Automating Credential Management:** The current manual process of uploading credentials via CSV to Arcadia's dashboard is not scalable. The goal is to build a service that can programmatically send and, more importantly, monitor the status of these credentials (e.g., "connection success," "MFA required," "invalid credential").
- **Automating Invoice Retrieval:** The manual method of calling the Arcadia API to fetch PDFs needs to be replaced with a reliable service. This service should be capable of pulling invoices for a specific client over a defined date range, which will support both ongoing operations and the onboarding of new clients who require historical data.

### **Automating Credential Status Monitoring**

A critical first step is gaining visibility into which client accounts in Arcadia are active and which have failed. This involves creating a new service to synchronize Arcadia's credential status with internal records.

- **Initial Implementation Plan:** The first technical task is to write a script that will iterate through all account numbers for a specific client (starting with Ascension). For each account, it will call the Arcadia API's credentials endpoint to retrieve the current connection status.
- **Data Storage Strategy:** The information fetched from Arcadia (status, failure details) will be stored in a new database table. This table will link the internal account record with its corresponding Arcadia account ID and status, providing a single source of truth for credential health. This is essential for the Data Operations team to review and take corrective action on failed accounts.
- **Open Questions:** A significant unresolved question is whether these API calls for checking status will incur costs, as the enterprise pricing agreement with Arcadia is not fully detailed. The decision was to proceed cautiously with the first client to understand the implications.

### **Building the Historical Invoice Pull Service**

Beyond checking credentials, a dedicated service is needed to actually retrieve the invoice documents from Arcadia. This service must be flexible enough to handle both one-time historical pulls and future automated updates.

- **API Workflow Clarified:** Fetching a PDF involves a sequence of API calls (see the sketch at the end of this summary). First, a search with the account number yields a unique Arcadia account ID. Second, using that ID, a call to the statements endpoint retrieves a list of available invoices, each with its own statement ID. Finally, the statement ID is used to download the actual PDF.
- **Service Design:** The envisioned service will accept parameters like client, vendor, account number, and a date range. It will then execute the API workflow to pull all corresponding PDFs within that period. This design will support onboarding new clients (by pulling their full history) and supplementing data for existing clients.
- **Learning from Past Mistakes:** It was noted that previous manual pulls fetched *all* available data without filters, which was inefficient. The new service must include proper query parameters to fetch only the necessary data.

### **Data Mapping and Integration Challenges**

A major architectural challenge discussed was ensuring the accurate mapping of invoices from Arcadia to the correct records in the internal DSS and UBM systems, given that all three systems have different identifiers.

- **The Unique Identifier Problem:** While the Arcadia API can be queried with just an account number, there is concern that this may not be a unique enough key, especially if a single credential provides access to multiple sub-accounts. The unique `account id` returned by Arcadia in the initial API response was identified as the crucial linking field.
- **Creating a Mapping Layer:** The solution is to create a mapping table or add columns to an existing table (like the client account table) that stores the relationship between the internal client/vendor/account composite key and the unique Arcadia account ID. This mapping is vital to ensure every pulled PDF is correctly associated in both DSS and UBM.

### **Technical Implementation & Next Steps**

The conversation concluded with concrete plans for initial development and broader technical alignment.

- **Immediate Action Items:** The immediate focus is on the Ascension client. The first step is to build the script and table for credential status synchronization. Following that, development will begin on the historical pull service for PDFs.
- **Broader Technical Context:** The credential automation is part of a larger "credential validation" epic being worked on separately, which aims to resolve discrepancies between credential storage in UBM, DSS, and external sources like Smartsheet.
- **Environment Setup:** A separate goal was established to provide local development environment access for the codebases (DSS API, DSS UI, IPS API) hosted in Azure DevOps, to enable future work and experimentation.

### **Ancillary Technical Topics**

Brief time was spent addressing two unrelated but urgent technical issues.

- **Power BI Dashboard Issue:** An investigation is needed into why a Power BI report is not displaying data beyond January 13th, with a suspicion that a backend query or connection may have failed.
- **DSS Build List Filter:** A separate UI issue was identified where the date filter in the DSS "all bills" view seems to be hard-coded or malfunctioning, preventing users from seeing bills after January 13th. This requires a front-end or query fix.
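A minimal sketch of the clarified API sequence, with placeholder base URL, endpoint paths, and response fields; these are assumptions for illustration, not Arcadia's documented API:

```python
import requests

BASE = "https://api.example-arcadia.test/v1"   # placeholder, not the real Arcadia base URL
HEADERS = {"Authorization": "Bearer <token>"}  # auth scheme is an assumption

def pull_statements(account_number: str, start: str, end: str) -> list[str]:
    """Sequence described in the meeting: account search -> Arcadia account ID
    -> statements list -> per-statement PDF download."""
    # 1. Search by utility account number to obtain Arcadia's unique account ID,
    #    the crucial linking field for the mapping layer.
    acct = requests.get(f"{BASE}/accounts", headers=HEADERS,
                        params={"accountNumber": account_number}).json()
    arcadia_account_id = acct["data"][0]["id"]

    # 2. List statements for that account ID, filtered to the requested range
    #    (the meeting stressed using query parameters instead of pulling everything).
    stmts = requests.get(f"{BASE}/accounts/{arcadia_account_id}/statements",
                         headers=HEADERS,
                         params={"from": start, "to": end}).json()

    # 3. Download each statement PDF by its statement ID.
    saved = []
    for s in stmts["data"]:
        pdf = requests.get(f"{BASE}/statements/{s['id']}/pdf", headers=HEADERS)
        path = f"{account_number}_{s['id']}.pdf"
        with open(path, "wb") as f:
            f.write(pdf.content)
        saved.append(path)
    return saved
```

The date-range parameters in step 2 are what distinguish this from the earlier unfiltered manual pulls the team wants to avoid repeating.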
Output Service Overview
## Overview of Billing and Output Service Issues

This meeting focused on critical operational challenges within the billing data pipeline, primarily concerning the Output Service and its interaction with upstream systems like Data Services (DSS) and Data Audit. The core problems identified included incorrect data propagation, systemic bottlenecks causing processing delays, and a lack of cohesive validation across the workflow. The discussion centered on diagnosing the root causes and formulating a strategic plan to enhance data integrity and system reliability.

## Resolving Incorrectly Loaded Bill Data

**The immediate priority was addressing bills that had been imported into incorrect client-vendor accounts within the data systems.** This misallocation risked creating "gap bills" for customers, where a processed but wrong bill would prevent the correct one from being pulled, disrupting financial reporting.

- **Proposed Solution and Process:** A plan was established to completely delete the affected document batches from the Data Services System (DSS). A technical lead confirmed the availability of a targeted delete query for document batches, which would effectively wipe all associated data and allow the system to repopulate the correct download tasks.
- **Coordination:** Action was delegated to provide the specific bill or batch IDs via a dedicated team channel so the deletions could be executed precisely, cleaning the data slate before further processing.

## Systemic Bottlenecks in the Output Service

**The Output Service was identified as a major bottleneck, causing significant delays in bill delivery to customers.** This was exacerbated by a shift from batch processing to individual bill processing, which unintentionally amplified the impact of validation errors.

- **Processing Delays and Business Impact:** The service was not processing bills quickly enough, leading to a growing backlog in the "Ready to Send" queue. This delay was particularly critical for priority bill-pay customers (e.g., prepay and UEM service levels), where on-time payment file generation is contractually essential. Manual intervention to push bills was frequent but inconsistent.
- **Root Cause Analysis (Validation Failures):** The primary driver of the backlog was a high volume of bills failing the Output Service's validation checks, estimated at 800 bills. These validations were originally intended as a rare, final check but are now catching frequent errors. Key failure points include missing or invalid invoice dates and incomplete client service item setups.
- **Amplification by Architectural Change:** Moving to per-bill processing meant that every bill with a validation error required its own data fetch. This caused the system to waste cycles repeatedly checking and failing the same invalid bills, drastically slowing overall throughput. The absence of a separate quarantine queue for failed items meant these bills remained in the active processing loop.

## Critical Gaps in Data Validation Workflow

**A fundamental disconnect was uncovered between validation steps in DSS/Data Acquisition and the final Output Service check, allowing erroneous data to flow to the last stage.** This gap is largely due to increased use of "audit skip" functionalities.

- **Source of Invalid Data:** A significant portion of problematic bills originates from DSS or from processes where audit steps are skipped. Unlike data entered through the traditional Data Acquisition (DDAS) system, which has robust, integrated validation, these bypassed streams lack the same rigorous automated checks. The result is that invalid data is only caught at the final output stage.
- **Human Audit Bypass:** Due to past resource constraints, a policy was enacted to skip human data audit for many bills, relying on the assumption that automated systems were accurate. This has led to downstream errors, including incorrectly labeled client bills (e.g., Arcadia badges on wrong accounts), which the UBM team was unprepared to handle.
- **The Client Service Setup Dilemma:** A specific, persistent validation error relates to client service configuration (e.g., charge-only lines with incorrect units). This is an account-level issue, not a bill-level one, and is not validated during data entry or account setup. Fixing these requires editing the client's master service items, not re-auditing individual bills.

## Strategic Solutions and Proposed Workflow Changes

**The consensus was to implement upstream validation parity and intelligent failure handling to resolve the core issues.** The goal is to catch errors earlier and prevent them from clogging the final output stage.

- **Implement Unified Pre-Audit Validation:** The primary strategy is to align the validation logic in DSS and Data Acquisition with the checks performed by the Output Service. Bills failing these checks in DSS would be routed directly to Data Audit for human review *before* ever reaching the "Ready to Send" queue. This should prevent ~80% of current output failures.
- **Redesign Failure Management:** For bills that still fail at the output stage, a new process is needed (see the sketch at the end of this summary):
  - Bills with data errors (e.g., bad dates) should be pulled back to Data Audit for correction.
  - Bills failing due to **client service setup** issues should *not* go to audit, as the bill itself is fine. Instead, an alert should trigger the account management team to fix the master client service items.
- **Optimize Output Service Performance:** Short-term tactical fixes include manually pulling all bills stuck in "Postpay" or "History" back to audit to clear the processing queue. This will allow the Output Service to focus only on valid, ready bills. A **priority queue** for prepay/UEM customers is also planned for implementation to ensure critical deadlines are met.
- **Enhanced Monitoring and Debugging:** Additional detailed logging will be added to the Output Service to better track the lifecycle of each bill and diagnose why manual pushes sometimes fail or are delayed. This will help identify and fix elusive issues, such as system queries incorrectly returning zero bills for a batch.
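A minimal sketch of the proposed failure routing, with illustrative names for the failure categories and destinations (the notes do not specify how failures are classified internally):

```python
from enum import Enum, auto

class FailureKind(Enum):
    DATA_ERROR = auto()            # e.g., missing or invalid invoice date
    CLIENT_SERVICE_SETUP = auto()  # account-level configuration problem

def route_output_failure(bill_id: int, kind: FailureKind) -> str:
    """Proposed redesign: data errors go back to Data Audit for correction;
    client service setup issues raise an account-management alert instead,
    since the bill itself is fine. Queue names are illustrative."""
    if kind is FailureKind.DATA_ERROR:
        return f"bill {bill_id}: routed to Data Audit for correction"
    if kind is FailureKind.CLIENT_SERVICE_SETUP:
        return f"bill {bill_id}: alert to account management (fix master service items)"
    raise ValueError(kind)
```

Separating the two categories is the key design point: re-auditing a bill cannot fix an account-level setup error, so sending it to audit would only re-clog the queue.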
Plan for Arcadia
## Summary

The meeting centered on resolving critical operational issues, planning strategic integrations, and addressing upcoming compliance deadlines. A major data ingestion problem was analyzed, solutions for system safeguards were discussed, and significant upcoming projects were reviewed with attention to resource allocation and timelines.

### Arcadia Data Ingestion Incident and Immediate Fixes

An urgent issue was addressed concerning incorrect data from the Arcadia P7 source being ingested into the wrong client accounts, requiring both immediate correction and the creation of a preventative system safeguard.

- **Nature of the Problem:** Instead of creating new accounts, incorrect PDF bills (e.g., for "Sheets") were mapped to and ingested into existing accounts for different clients (e.g., "Victra"). This occurred because the file names matched the wrong client accounts during the upload process by the Data Operations team.
- **Immediate Action:** The corrective workflow involves marking all affected bills for deletion and resetting the download status. This ensures that if a bill is legitimately due, it will reappear as a task for manual download, preventing missed bills.
- **Long-term Technical Safeguard:** A new validation step will be added at the bill processing stage. The system will check whether the account number and other key identifiers on the incoming bill PDF match the account it is being uploaded to (see the sketch at the end of this summary). Any mismatch will halt the process and route the bill to a manual validation queue for review before proceeding, preventing similar errors in the future.

### Credential Management and Arcadia Integration Planning

Progress on a credential management system was confirmed, and comprehensive planning for the permanent Arcadia integration, referred to as the "Arcadia Dashboard," was outlined.

- **Credential Management System:** This project is on track, with a push to the testing environment expected imminently and a target production release early the following week.
- **Arcadia Integration PRD:** A detailed Product Requirements Document (PRD) has been drafted to formalize the long-term solution. This document covers the API integration, metadata capture, data flow, and cloud infrastructure. A technical review is scheduled to validate the approach, identify dependencies, and break down the work into parallel or sequential tasks.
- **Strategic Data Tracking:** A key driver for the new dashboard is the need for comprehensive, client-agnostic file tracking. The current lack of visibility creates challenges when clients dispute whether files they've sent have been processed. The new system aims to provide an audit trail from the original source file all the way to the final processed bill, answering these questions definitively.

### Q3 Project Portfolio and Resource Constraints

The quarter's project load was reviewed, highlighting significant overlap between major initiatives and raising concerns about developer bandwidth.

- **Key Q3 Deliverables:** The quarter includes three major concurrent efforts: the **Azure cloud migration**, the annual **SOC 1 Type 2 audit renewal**, and the first-time implementation of a **SOC 2 Type 2 certification**.
- **Bandwidth Concerns:** The Azure migration is expected to heavily occupy the primary DevOps engineer. Simultaneously, the two SOC audits require substantial evidence gathering and process documentation from the development and infrastructure teams, creating a potential resource conflict.
- **Proposed Mitigation:** To address the compliance workload, the idea of hiring or assigning a dedicated, tech-conversant project manager was discussed. This person would coordinate evidence collection, manage vendor (Eden) communications, and handle ticket management, thereby reducing the burden on the core engineering leads.

### SOC 2 Certification and Client Commitments

The initiation of the SOC 2 Type 2 certification process was confirmed, driven by specific client demands and upcoming contracts.

- **Project Initiation:** Work has begun on the first-ever SOC 2 Type 2 certification for the organization. An internal expert who has previously managed such projects will lead the evidence management effort.
- **Business Imperative:** The project is critical for business development, as at least one potential client (L'Oréal) is preparing to sign a contract contingent on achieving SOC 2 compliance within the calendar year. Another existing client also requires it.

### Communication and Operational Strategy

Actions were defined to improve internal communication and set clear expectations for the team regarding operational changes.

- **Streamlining Communication:** To prevent context loss from numerous direct messages, all technical discussions related to data systems development will be centralized in a dedicated Slack channel. This ensures broader awareness and more efficient information sharing within the team.
- **Managing Team Expectations:** A leadership-facing view of the Arcadia integration's impact will be prepared. The goal is to transparently show the team that while Arcadia will resolve a large percentage of manual work, new categories of tasks (managing non-covered vendors, handling MFA/credential issues, and addressing the remaining "delta") will emerge. This clarity is intended to prevent surprise and ensure proper resource planning for the new operational model.
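A minimal sketch of the proposed safeguard, assuming the bill PDF's text has already been extracted; the matching here is deliberately naive, and a real implementation would be template-aware:

```python
import re

def validate_upload(pdf_text: str, target_account_number: str) -> str:
    """Proposed check at the bill processing stage: confirm the account
    number printed on the incoming bill PDF matches the account it is
    being uploaded to; otherwise halt and route to manual validation."""
    # Normalize both sides so spacing/dash differences don't cause false mismatches.
    wanted = re.sub(r"[\s-]", "", target_account_number)
    haystack = re.sub(r"[\s-]", "", pdf_text)
    if wanted in haystack:
        return "PROCEED"
    return "MANUAL_VALIDATION_QUEUE"  # mismatch: hold for operator review
```

Had a check like this run on the incident described above, the "Sheets" PDFs would have failed the match against the "Victra" accounts and been held for review instead of ingested.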
SIMON - INTERNAL UBM Decision Point
## **Project Background and Current Challenges**

The meeting centered on the Simon project, a client contract signed in July, where onboarding only began in mid-November after the client expressed frustration over delays. The core challenge is defining and developing critical financial reports required for go-live, a process complicated by the recent passing of the client's key knowledge holder. New report requirements continue to surface unexpectedly, and the client's team lacks a clear inventory of who uses which reports, forcing the project team to reverse-engineer specifications by interviewing users.

## **Analysis of Go-Live Readiness and Risks**

The group thoroughly assessed the feasibility of the current April 1st go-live date and unanimously agreed it is unattainable and risky.

- **Unrealistic Timeline:** Even with complete specifications today, development is estimated at 45 days, not accounting for subsequent client testing and approval cycles. A key dependency is the client's 60-day notice period to their current vendor (NG), which would have needed to start immediately to hit an April 1st date.
- **Significant Financial and Reputational Risk:** Going live with incomplete or incorrect reports would cause major financial and SOX compliance issues for the client. The consensus was that this would ultimately damage the relationship, as the provider would be blamed for the failure, regardless of the root cause being unclear client requirements.
- **Consensus for a New Date:** The team agreed that a **June 1st go-live date** is a realistic and necessary target. This provides a clear timeframe for the client to manage their vendor notice and allows both teams a concrete goal.

## **Deep Dive into Report Requirements and Status**

A major portion of the discussion involved scrutinizing the specific reports needed, their status, and the obstacles in finalizing them.

- **Expanding and Unclear Specifications:** What was originally thought to be three reports has now grown to at least seven. The client's risk-averse culture leads them to request "all the data" without a clear understanding of what is essential, complicating development scope.
- **Current Completion Estimates:**
  - **Payment Files (CSV & Text versions):** These are approximately 80% defined, though a crucial GL mapping from the client is still pending and could alter the build.
  - **Anaplan Feed Files (Bill Info & Vendor Info):** These are new requirements. While similar to existing reports, they require new GL allocation logic, making them more complex than simple column matching.
  - **GL Allocation Report:** A new requirement similar to an account-side report; specifications are still undefined.
  - **Accrual (CT) Report:** A complex, detailed report used for year-end reconciliation. The team strongly advocates for this to be a post-go-live deliverable, as it is not a day-one necessity, but the client currently insists on its inclusion.
- **Process Anomalies:** The team discovered the client's data contains consolidated line items (e.g., water/sewer combined) and a large "other charges" bucket without clear mapping. Resolving how to handle these, or determining whether the client even wants them changed, requires senior client direction.

## **Dependencies and Logistical Hurdles**

Beyond report development, several other critical path items were identified that support the need for a delay.

- **LOA and Account Onboarding:** The Letter of Authorization (LOA) for EDI accounts cannot be pursued until the client gives notice to NG. Without the LOA, portal setup and credential changes for thousands of accounts cannot begin, a process estimated to take at least 30 days.
- **User Management:** A process for user provisioning and de-provisioning via a single sign-on (SSO) feed is still an outstanding development item that needs to be scoped and built.
- **Extended Testing Phase:** Given the large number of stakeholders and the client's risk-averse, detail-oriented nature, the User Acceptance Testing (UAT) phase is anticipated to be lengthy and must be factored into the timeline.

## **Proposed Strategies and Path Forward**

The team discussed proactive strategies to build client confidence and demonstrate progress while working toward the new June 1st date.

- **Phased Data Rollout for Familiarization:** A proposal was made to provide the client with live platform access using a subset of their real data (e.g., Constellation accounts) well before the go-live date. This would allow users to explore the system, generate questions early, and become comfortable with the new platform, potentially reducing last-minute confusion.
  - *Technical Note:* This would involve manually running processes to generate sample AP files, as the system wouldn't process them automatically in a "history" state.
- **Financial Recognition for Work:** The idea of discussing partial billing for the onboarding and report development work completed to date was floated, similar to a past approach with another client, to recognize the effort invested.
- **Communication Plan:** The team agreed that the rationale for the June 1st date and the suggestion of a phased data rollout should be communicated directly to the primary client stakeholder (Drew) in a focused conversation, not on a large group call, to manage the message effectively.
UBM Project Group
## **Introductions and Project Scope Clarification**

The meeting began with introductions of team members and a new participant shadowing the project for learning purposes. The core discussion clarified the nature of the project work, emphasizing that it involves improving multiple distinct business processes for efficiency and automation, rather than optimizing a single, end-to-end workflow. A key administrative point was addressed regarding a requested process map; it was clarified that the request was actually for a project timeline update to show completed and in-progress work for leadership reporting, not a detailed process document.

## **Review and Confirmation of Completed Multi-Month Initiatives**

Several initiatives marked for completion in previous months were reviewed and formally confirmed as finished by the team. This ensures the project plan accurately reflects progress.

- **Establish New Credentials:** This task was confirmed as complete, with a clear distinction made that it was separate from the ongoing, related "UBM Credential Management" work.
- **Customer Issues (Phase 1 & 2):** Both phases of addressing customer issues were validated as completed, closing out this significant workstream.
- **Update Payee Files:** The update to payee files was also confirmed as done, finalizing another scheduled deliverable.

## **Status Update on First Quarter (January) Deliverables**

The bulk of the meeting focused on reviewing tasks slated for completion in January, with many items carrying over from December. The team assessed whether deadlines were still realistic, given the current date.

- **General Timeline Assessment:** Most tasks were confirmed to be on track for their January or February deadlines as originally planned.
- **Dependencies on External Partners:** Several specific items were identified as being dependent on external teams (Paikylady and Sign) for data, field definitions, or discrepancy resolutions. The completion of these items is pending feedback or action from these partners.
- **Check Monitoring & Payment Data Reconciliation:** Active work is underway, but a final solution depends on an external team's response regarding a specific requirement. This item may slip into February.
- **Electronic Payment Conversion & SLA Monitoring:** These are tied to reconciling payment data timing between systems. An analysis of discrepancies is in progress, with a call scheduled to align data refresh cycles and investigate missing records.
- **Overall Q1 Outlook:** While there are external dependencies, the team plans to reassign completion dates by the following week. If external feedback is received imminently, tasks may stay in January; if delays are longer, they will be formally moved to February.

## **Planning and Discussion for February Initiatives**

The discussion looked ahead to February's workload, which includes several multi-month initiatives concluding and new potential work items.

- **Upcoming Major Deliverables:** Key items ending in February include the Data Validation System Status, UBM Credential Management, and the foundational work for the Arcadia integration.
- **Evaluation of Potential Work Items:** Two items, "MFA Phones" and "Downloads," were highlighted for a strategic decision. Their necessity is contingent on the volume of accounts remaining in-house post-Arcadia integration, which is projected to be a small minority (at most ~12%). The team agreed to analyze the specific subset of accounts affected before allocating resources, as the work involves implementing a workaround using personal phones.
- **Resource Allocation:** The general sentiment was that the February plan is achievable, but final commitment to the MFA-related tasks is pending further data analysis.

## **Forward Look into March and Project Closing**

The preview of March shows a lighter scheduled load, allowing for potential spillover from previous months.

- **March Deliverables:** The primary items are the Integrity Checks (which may become unnecessary) and the Bank Poll feature. The Bank Poll work is noted as being primarily driven by an external partner, with minimal internal effort required.
- **External Partner Coordination:** For the Bank Poll, the team suggested initiating early conversations with the external partner to prepare for the upcoming work.
- **Meeting Conclusion:** The session concluded with a reiteration of the shadowing participant's role and a plan to provide the discussed project timeline update to leadership.
Report
## **Summary** ### **Understanding the GL Allocation Report and Mapping** The meeting began with a technical discussion of a General Ledger (GL) allocation report, which is used for SOX compliance to monitor changes in GL account coding. A core challenge was identified: the service type descriptions (like "Other Services" or "Miscellaneous Charges") provided by the current vendor are too generic and do not reliably map to specific GL codes. For instance, the "Other Services" category could contain deposits or other one-off charges, all posting to the same GL, while "Miscellaneous Charges" could post to two different GLs despite the identical description. - **The key resolution** was the agreement that a complete and precise mapping is required. Instead of the vague service types, the team will provide a filtered chart of accounts containing *only* the GL codes relevant to utility billing. This mapping will explicitly define what specific charge on a utility bill (e.g., deposit, water, sewer, late fee, repair) corresponds to each unique GL account number. This definitive mapping is a prerequisite for configuring the new platform correctly. ### **Methodology for Allocating Charges to GL Codes** The conversation detailed how charges on a bill are split across multiple GL accounts. For commodity charges like water or electric, allocations are based on a fixed percentage split tied to a specific utility account. These splits can differ by commodity on the same bill (e.g., electric and gas on one bill can have different allocation percentages). - **Hard-Coded Allocations:** Certain charge types, such as late fees and deposits, are consistently allocated 100% to a single, predefined GL account. This logic would be hard-coded in the new system. - **Miscellaneous & Tax Charges:** For non-standard or tax charges, the allocation defaults to mirroring the percentage split of the main commodity charges on that bill. - **Change Management:** These GL allocation splits are typically reviewed and potentially updated on an annual basis, though a new monthly billing process may lead to more frequent changes, which is under internal review. ### **Handling Complex "Summary" Utility Bills** A significant portion of the meeting was dedicated to addressing "summary" bills, where one utility account encompasses multiple properties or service addresses. The analysis covered two complex scenarios: a single-property bill with multiple commodities (electric, water, sewer) and a multi-property bill where different service addresses on one invoice correspond to different legal entities. - **Proposed Technical Solution:** The platform can handle these by treating each unique service address or commodity combination as an individual line item ("build block"). Each block can then have its own specific GL allocation applied. - **Data Field Consideration:** A key decision point was whether to use the original, overarching utility account number from the bill or to create a composite identifier. The tentative agreement is to use the original account number, with the understanding that a crosswalk will be provided to financial reporting to reconcile this with existing records on day one. ### **Payment File (TXT) Specification and Build** The team reviewed the technical specifications for the payment file that will be sent to the JD Edwards (JDE) system. The discussion confirmed and clarified several formatting rules for the fixed-width text file. 
- **Field Formatting:** Specifics included leaving certain character ranges blank, using a Julian date format, and left-padding the business unit field with spaces (not zeros) to ensure it is a 12-character string. - **GL Account Format:** The GL account will be passed as a single attribute string (e.g., `0115.501530.0000`) and then parsed into its constituent parts (business unit, object account, subsidiary) within the file generation logic. - **Vendor Name & Charge Scope:** The vendor name field will be populated with a customized "pretty name," truncated to 30 characters. Importantly, it was confirmed that the payment file should only include charges for the *current* service period; any past due amounts paid would create a discrepancy and require a separate process. - **Data Aggregation Logic:** For the billing dates on aggregated lines (where multiple charges roll up to one GL line), logic will need to handle potentially different service periods, though they are typically the same. ### **Prerequisites and Next Steps** The meeting concluded by identifying critical path items required to finalize the build. The single most important prerequisite is receiving the final, filtered GL code mapping from the accounting team. This mapping will unlock the configuration of allocation rules and the generation of the required SOX compliance report. The clarifications on summary bill handling and TXT file specifications have defined the development path forward, pending that key data input.
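To make the fixed-width rules concrete, here is a minimal sketch of a line builder. The field order, widths, and the CYYDDD Julian variant are illustrative assumptions, not the confirmed JDE layout; only the space-padding of the 12-character business unit, the 30-character vendor-name truncation, and the parsing of the dotted GL string follow the spec discussed above.

```python
from datetime import date

def jde_julian(d: date) -> str:
    # JDE-style Julian date (CYYDDD); the exact variant is an assumption here.
    century_flag = (d.year // 100) - 19  # 0 for 19xx, 1 for 20xx
    return f"{century_flag}{d.year % 100:02d}{d.timetuple().tm_yday:03d}"

def build_payment_line(gl_account: str, vendor_pretty_name: str,
                       amount_cents: int, service_date: date) -> str:
    # Parse the single GL attribute string into its constituent parts.
    business_unit, object_account, subsidiary = gl_account.split(".")
    fields = [
        business_unit.rjust(12),            # left-pad with spaces (not zeros) to 12 chars
        object_account.ljust(8),            # width is a placeholder
        subsidiary.ljust(8),                # width is a placeholder
        vendor_pretty_name[:30].ljust(30),  # "pretty name" truncated to 30 chars
        jde_julian(service_date),
        f"{amount_cents:011d}",             # amount field width is a placeholder
        " " * 10,                           # character range the spec leaves blank
    ]
    return "".join(fields)

print(build_payment_line("0115.501530.0000", "Springfield Municipal Water & Sewer",
                         1234500, date(2025, 2, 5)))
```

Parsing the dotted GL string once, at file-generation time, keeps the bill-level data model unchanged while still emitting the three JDE components as separate fields.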
Jay <> Faisal
## Summary The discussion primarily focused on two major operational challenges: compliance requirements for SOC (System and Organization Controls) and significant project hurdles related to the "Simon" client implementation. Concerns were raised regarding a potential resource crunch, with key personnel being stretched across three concurrent major initiatives: an Azure migration (with an August target), a SOC 1 Type 2 renewal (due in September), and a new SOC 2 Type 1 audit. The approval and integration of the "Vanta" tool for compliance automation was also flagged as a lengthy bureaucratic process within the larger organization (Constellation). The bandwidth for managing extensive manual policy work alongside technical projects was a primary worry. The bulk of the conversation centered on the Simon project, where deep uncertainty about the timeline and feasibility was expressed. While initially aiming for a late March go-live, continuous discovery has revealed more complexity than anticipated. The core issue is replicating legacy system ("NG") functionality, specifically around generating custom financial reports and data files (like AP payment files). A critical blocker is the lack of clarity on GL (General Ledger) allocation requirements at a detailed charge-type level, which the current platform does not natively support. This fundamental mismatch could require heavy customization. The process is hindered by fragmented client knowledge, inefficient requirement-gathering meetings, and the absence of a single, empowered decision-maker on the client side. A key internal meeting was scheduled to reassess the Simon go-live date and strategize on communicating challenges externally. ## Wins - A critical blocker for the Simon project (GL allocation logic) was clearly identified as the central issue requiring immediate resolution, bringing focus to the core complexity. - Potential leverage points for pushing back on client demands were identified, such as challenging the necessity of certain year-end reports by April and potentially delaying non-essential reports. - Progress was made in understanding the true scope of customization needs for clients, distinguishing between essential AP file integrations and less common, fully customized reports. ## Issues - **Severe Resource Constraints:** A major bandwidth conflict exists for the core team between the Azure migration, SOC 1 Type 2 renewal, and the new SOC 2 audit. The Azure migration is likely to take priority, putting the SOC compliance timelines at risk. - **SOC Compliance Complexity:** There is confusion over audit scope (SOC 1 vs. SOC 2) and concern over the manual effort required for renewals. The approval process for the "Vanta" automation tool within the larger corporate structure is expected to be slow and could delay evidence collection. - **Simon Project Timeline at Risk:** The late-March go-live date is considered highly unrealistic. Requirement gathering is still incomplete and inefficient, with new reports (now five essential ones) still being discovered. - **Fundamental Product-Requirement Mismatch:** The client's need for GL allocations at the charge-type level may not be supported by the current product architecture, implying significant customization or a difficult conversation about scope. - **Inefficient Client Engagement:** Meetings with the Simon team are described as unproductive, with too many participants, no clear ownership, fragmented knowledge, and a resistance to recording, leading to repetitive discussions. 
- **Upcoming Absence:** A key team member will be absent for a full week, further slowing progress on the already-delayed Simon requirements phase. ## Commitments - **Faisal Alahmadi** to double-check the approval status of the "Vanta" tool within the larger organization (Constellation). - **Faisal Alahmadi** to pull and provide a list of vendors for the compliance team to review, aiding the SOC 2 process. - The team to use the scheduled internal meeting to formally reassess the Simon go-live date and develop a strategy for external communication regarding delays and challenges. - The team to raise the fundamental issue of GL allocation requirements during the internal meeting to determine a product and project strategy. - The team to revisit and discuss resource allocation and prioritization for the three major initiatives (Azure, SOC 1, SOC 2) in the following week.
[EXTERNAL]Invitation: UBM Weekly Touchpoint @ Weekly from 9:30am to 10am on Thursday (EST) (faisal.alahmadi@constellation.com)
## **Policies for SOC2 Compliance** The meeting began with a core discussion on sourcing the necessary policies for the SOC2 compliance framework. It was determined that existing policies are not housed in a dedicated "policies" section within Confluence; instead, relevant information is dispersed across various pages, requiring compilation. A strategic decision was made to use Constellation's existing policy set as a foundational template, with adaptations for the organization's current operational state, rather than building all policies entirely from scratch. - **Using Constellation's policies as a baseline:** This approach was chosen to ensure alignment with future integration plans and to avoid potential gaps or conflicts. The policies will be reviewed and adjusted to fit the current environment, particularly for areas like acceptable use and asset management. - **Addressing policy gaps:** It was noted that a data management policy covering data retention specifics may need to be created or augmented, as this is a SOC2 requirement not fully covered in the existing Constellation documentation. - **Flexibility for future changes:** The plan allows for policies to be updated later when the organization fully integrates into the Constellation infrastructure and onboarding processes, ensuring a smooth transition. ## **Timeline and Observation Period** The target timeline for the SOC2 Type 2 audit was a major point of discussion, directly influencing the policy strategy. The goal is to establish a quarterly observation period in Q3 (July to September), followed by the audit. - **Targeting a Q3 observation window:** Initiating the observation in Q3 sets the stage for the subsequent audit and allows the compliance framework to be tested during that period. - **Implications of the migration timeline:** A significant consideration is the planned migration from GCP to Azure. Since full integration with Constellation's Azure environment is expected post-Q3, the initial compliance efforts will focus on the current GCP setup, with policies tailored to present operations. - **Phased policy alignment:** The policy work will proceed based on the current operational model, with the understanding that they will be updated or replaced once the organization is operating within the Constellation Azure environment and adhering to their standard processes. ## **Access Control and Device Management** A detailed conversation addressed the practical challenges of access control and device management, especially concerning contractors and the current hybrid infrastructure setup. - **Contractor device compliance:** A potential challenge identified involves contractors, particularly those from Cognizant, using their own company-managed devices. The concern revolves around ensuring these devices meet security standards for password policies, screen lock, and antivirus, which may be outside direct control. - **Onboarding process verification:** It was confirmed that while contractors may not use Constellation-issued machines, there is a documented onboarding process via Cognizant and internal access approval tracking through Jira for application and tool access. This documented, approval-based process is considered sufficient for SOC2 controls. - **Current infrastructure scope:** The team clarified that most daily work is conducted outside the Constellation VPC, primarily within GCP, and does not currently leverage Constellation's endpoint detection and monitoring services like CrowdStrike Falcon. 
## **Integrations and Automated Testing** The team explored the technical implementation of compliance monitoring through the Vanta platform, focusing on integrations for automated testing and evidence collection. - **Initial integration with GCP:** The immediate action is to connect the GCP environment to Vanta. This enables automated testing of configurations against SOC2 requirements, streamlining evidence collection compared to manual screenshots and documentation. - **Scoping integration for production:** The integration can be scoped specifically to the production environment within GCP, excluding test or development projects, which keeps the SOC2 audit focused on relevant systems. - **Future challenge with Azure integration:** A foreseen hurdle is integrating Vanta with the future Azure environment, as it will require approval from Constellation's infrastructure and cybersecurity teams due to their strict bureaucratic controls. A workaround using manual evidence collection is planned if automated integration is not permitted. - **Admin access for the team:** Key personnel were granted administrator access within Vanta to configure integrations, manage scopes, and address any failing automated tests. ## **Vendor Management and Risk Assessment** The final substantive topic covered the process for managing third-party vendor risk as part of the SOC2 compliance framework. - **Compiling the vendor inventory:** The first step is for the internal team to add all third-party vendors used in the service delivery (e.g., SendGrid for email) into the Vanta platform. - **Focus on high-risk vendor reviews:** Subsequent analysis will categorize vendors based on risk. Comprehensive security reviews will be prioritized for high-risk vendors, specifically those that have access to sensitive organizational data or pose a significant security threat. - **Clarification on data flow:** It was noted that many current vendors are service providers where data flows primarily *from* the vendor to the organization, not the other way around, which may influence their initial risk categorization.
Plan for Arcadia
## **Review of Current Development Tasks and Status** The meeting began with a systematic review of multiple development tickets to verify their completion status and deployment environment. This was done to clean up the task backlog and ensure alignment on what was ready for production versus what remained in testing. - **Operators Nodes and Soft Vendor Sorting:** Several features were confirmed as completed and deployed to the production environment. This includes the "FDD Connect Operators" nodes, the ability to sort vendors alphabetically by invoice count, and the display of the user who created a record. - **Pagination Fix and Build Template Features:** A bug related to pagination resetting when switching between vendors was confirmed as fixed. Updates to the build template, such as adding an "Updated On" field and linking invoice templates to specific build records, were reviewed. These features are currently available in the test environment but are **awaiting deployment to production**. - **BDE/Third-Party Flag and Activity History:** A flag to mark vendors enrolled with a third-party service (initially BDE, to be expanded for Arcadia) was implemented and is live. The Activity History feature, which tracks changes made to records, was demonstrated in the test environment and is also ready for production deployment pending final verification. ## **Credential Management System Development** A significant portion of the meeting was dedicated to reviewing the progress on a new Credential Management system, which is crucial for ensuring data consistency and preparing for third-party integrations. - **Core Validation and Management Pages:** Development is nearly complete on two key interfaces. The **Validation Page** identifies credential mismatches between systems (e.g., differing passwords) or credentials that exist in only one system, requiring manual review. The **Management Page** will automatically house credentials that perfectly match across systems. - **User Workflow and Filtering:** The user can resolve discrepancies on the Validation Page by selecting which system (UVM or DSS) should be considered the "source of truth." The interface includes client filters, allowing operators to focus on specific customers during cleanup. A proposal was made to add an "archive" action for obsolete accounts. - **Next Steps and Automation Logic:** The remaining work involves deploying the current changes to a test environment and refining the underlying automation logic. The system is designed to continuously compare credential tables and flag new discrepancies as they arise, preventing bulk mismatches in the future. The "matched records" filter on the Validation Page will be removed, as matching credentials will automatically move to the Management Page. ## **Cleanup of Problematic Builds and Data Integrity** A priority task involves cleaning up billing data that was incorrectly processed or is missing crucial client information, which is essential to prevent corrupted data from propagating to downstream systems. - **Identifying and Isolating Bad Data:** The focus is on scanning for and marking entire bills or individual line items that have missing or incorrect Client IDs. These problematic records are being flagged for deletion. - **Operational Review and Communication:** A detailed Excel report is being generated that lists all affected client accounts and bill IDs. 
This report will be shared with the operations team for their review and final confirmation before any deletion occurs, ensuring no accurate data is accidentally removed. ## **Planning for Arcadia Integration and Foundation** The team discussed the strategic approach for integrating with a new third-party service, Arcadia, balancing immediate client needs with building a sustainable, long-term solution. - **Immediate "One-Time" Effort for Ascension:** For the initial client (Ascension), a short-term, manual process will be established. This involves periodically calling Arcadia's API to pull statement PDFs and then ingesting them into the system. The goal is to service this client quickly while the foundational infrastructure is built. - **Designing the Long-Term Foundation:** The sustainable solution involves setting up a **webhook listener** to receive notifications from Arcadia when new statements are available. The service would then automatically fetch the PDF, extract the client account number, and match it to existing internal client-vendor data before ingestion. - **Key Technical Challenge - Client Matching:** A central discussion point was the complexity of reliably matching incoming Arcadia statements to the correct internal client and vendor combination. Relying solely on the account number in the PDF was deemed potentially too simplistic. A more robust matching logic, possibly leveraging the new credential management tables, is required to ensure accuracy and avoid future issues. ## **Prioritization and Forward-Looking Roadmap** The conversation concluded with setting clear priorities and defining the next steps for each team member to move the key initiatives forward effectively. - **Finalizing Credential Management:** The immediate next step is to deploy the credential validation and management pages to a test environment for final review and testing. - **Technical Design for Arcadia:** Following the completion of the data cleanup, a dedicated technical assessment and design document for the Arcadia integration foundation will be created. This will break down the required tasks, such as building the webhook listener, the client-vendor matching service, and the automated ingestion pipeline. - **Task Ownership and Epic Alignment:** Work will be organized into clear epics. The credential management work will support the broader Arcadia integration. The reports feature and specific Arcadia-related development tasks will be assigned to the appropriate team members for parallel progress.
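A minimal sketch of the long-term foundation described above, using Flask. The webhook path, payload field names, and the `ingest_statement` entry point are assumptions; the account-number index stands in for the more robust matching logic (possibly backed by the new credential management tables) that the team agreed is needed.

```python
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

# Stand-in for matching logic backed by the credential management tables:
# maps a utility account number to an internal (client, vendor) pair.
ACCOUNT_INDEX = {"123456789": ("ascension", "springfield-water")}

def ingest_statement(client_id: str, vendor_id: str, pdf_bytes: bytes) -> None:
    # Hypothetical entry point into the existing ingestion pipeline.
    print(f"ingesting {len(pdf_bytes)} bytes for {client_id}/{vendor_id}")

@app.post("/webhooks/arcadia")
def on_new_statement():
    event = request.get_json(force=True)
    pdf_url = event["statement_pdf_url"]   # field names are assumptions
    account_number = event["account_number"]

    match = ACCOUNT_INDEX.get(account_number)
    if match is None:
        # The account number alone may be too simplistic, so unmatched
        # statements go to manual review instead of being ingested blindly.
        return jsonify(status="queued_for_review"), 202

    client_id, vendor_id = match
    pdf_bytes = requests.get(pdf_url, timeout=30).content
    ingest_statement(client_id, vendor_id, pdf_bytes)
    return jsonify(status="ingested"), 200
```

Routing unmatched statements to review rather than rejecting them keeps the one-time manual process for Ascension viable while the matching logic matures.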
DSS Daily Status
## Summary The meeting focused on diagnosing significant data processing errors, outlining immediate corrective actions, and planning architectural improvements to prevent future issues. A central theme was the critical breakdown in the process of matching and ingesting customer billing data from a vendor (Arcadia) into the internal system (DSS), which led to incorrect client-vendor mappings being pushed forward. ### Root Cause Analysis of Data Errors The core of the discussion involved dissecting how incorrect data was processed and identifying three distinct categories of errors. These issues stemmed from a manual, multi-step process vulnerable to miscommunication and a lack of automated validation. - **Incorrect Client-Vendor Matching:** This was the primary issue, where the system or process incorrectly associated a bill from one vendor with a different client. This occurred despite having a known set of credentials, suggesting a flaw in the matching logic or its manual execution. - **Missing Client IDs:** A subset of records was processed with completely blank client identification fields. The cause was unclear, but it pointed to a failure in the initial data extraction or naming convention, allowing files without proper identifiers to enter the workflow without being flagged. - **Multi-Account Bill Complications:** Some files contained data for multiple customer accounts bundled together, which the existing process was not designed to handle correctly. While this was a large batch, it was noted that these errors were caught before payment processing. ### Strategic Process Improvements and Automation To address these failures, the conversation shifted to designing a more robust, automated system. The goal is to move away from error-prone manual steps and implement controls that ensure data integrity from the point of collection. - **Enforcing Automated Validation:** The future state requires that the Client ID and Vendor ID pairing be automated and mandatory based on the credential set. The system must reject or flag any file where this pairing is missing or doesn't align with the credential permissions, preventing incorrect matches from the start. - **Implementing Comprehensive Logging:** A new logging table is needed to create a complete audit trail. This would track every API call to Arcadia, link the downloaded files (PDFs and JSON) to specific credentials and triggers, and record their storage path. This transparency would drastically reduce the time needed to diagnose problems. - **Centralizing and Hosting Key Components:** The current locally-run scripts must be moved to a cloud-hosted environment. Alongside this, a centralized, approved cloud storage directory (e.g., on Azure) is required for all downloaded files before DSS ingestion, replacing the unreliable manual upload process. ### Team Capabilities and Architectural Leadership A significant portion of the meeting was dedicated to evaluating the team's capacity to design and implement these complex systemic changes, revealing a strategic gap in senior technical leadership. - **Identifying a Solutions Architecture Gap:** While the current development team is competent at implementation, there is a recognized lack of a dedicated resource to own high-level technical architecture. This role is needed to holistically understand the product, evaluate different solution paths (like migrating functions from DDS to DSS), and make strategic design decisions without deep immersion in day-to-day coding.
- **Exploring Resources to Fill the Gap:** Options were discussed to bring in an external solutions architect, possibly from an existing partner like Cognizant or Canvascraft. The priority is finding someone who can quickly understand the business context, take ownership of major technical milestones, and guide the team through the upcoming wave of changes focused on DSS optimization and migration projects. ### Technical Infrastructure and Access Practical next steps were outlined concerning the technical environment needed to support the new process and enable more efficient work. - **Selecting Approved Cloud Storage:** Due to organizational cybersecurity policies, the storage solution for the PDFs must be on an already-approved platform, specifically legacy GCP or the organization's Azure instance. Azure was highlighted as the preferred contender due to existing integrations and the ability for the team to configure it quickly. - **Enabling Development Access:** A secondary need was identified for a Windows-based virtual machine to allow for direct editing and troubleshooting of certain reports and tools that require a Windows environment, which would improve operational efficiency for key personnel.
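A compact sketch of the two controls proposed above: mandatory client/vendor validation against the credential set, plus an append-only ingest log. Table and field names are illustrative, and in practice the log would live in the hosted database rather than in-memory SQLite.

```python
import sqlite3
from datetime import datetime, timezone

# Allowed (client_id, vendor_id) pairs derived from the credential set (placeholders).
CREDENTIAL_PAIRS = {("C001", "V-ARC-17"), ("C002", "V-ARC-09")}

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ingest_log (
    logged_at TEXT, credential_id TEXT, trigger_source TEXT,
    storage_path TEXT, client_id TEXT, vendor_id TEXT, outcome TEXT)""")

def ingest(client_id, vendor_id, storage_path, credential_id, trigger_source):
    """Reject or flag files whose client/vendor pairing is missing or not backed
    by a credential, and record every attempt for the audit trail."""
    if not client_id or not vendor_id:
        outcome = "rejected: missing identifier"
    elif (client_id, vendor_id) not in CREDENTIAL_PAIRS:
        outcome = "flagged: pairing not in credential set"
    else:
        outcome = "accepted"
    db.execute("INSERT INTO ingest_log VALUES (?, ?, ?, ?, ?, ?, ?)",
               (datetime.now(timezone.utc).isoformat(), credential_id,
                trigger_source, storage_path, client_id, vendor_id, outcome))
    return outcome

print(ingest("C001", "V-ARC-17", "azure://bills/2025/02/a.pdf", "cred-41", "api_poll"))
print(ingest("", "V-ARC-17", "azure://bills/2025/02/b.pdf", "cred-41", "api_poll"))
```

Logging the outcome alongside the credential and trigger is what turns a multi-day diagnosis into a single query over the audit trail.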
UBM Errors Triage
## Customer The customer operates in the energy or utility management sector, utilizing the platform for processing and validating a high volume of invoices and meter data. Their role involves managing complex billing data across multiple systems and ensuring accuracy for various client accounts. The background indicates a transition from legacy processes to a more automated platform, though this shift has involved integrating historically inconsistent or "janky" data, creating a unique set of challenges for this specific client compared to others. ## Success The most significant achievement has been the establishment of an automated build validation system. This dedicated framework allows for the continuous monitoring and flagging of data discrepancies. It represents a foundational step toward systematic data cleanup and provides a structured method to identify errors, such as invoices posted to incorrect client accounts or unit of measure issues, which were previously more difficult to isolate at scale. ## Challenge The primary and most persistent challenge is the misapplication of invoice data to the wrong client accounts within the system. This core data integrity issue leads to cascading problems, including incorrect payments and significant manual rework. The root cause appears to be a disconnect between source systems pulling correct account numbers and the platform applying them incorrectly, indicating a need for a more robust matching or flagging mechanism within the workflow. This issue is compounded when processing large, automated data drops from integrated systems, which can introduce hundreds of errors before they are caught. ## Goals Key objectives for the customer moving forward include: - Achieving complete and accurate data synchronization to eliminate invoices being placed on the wrong accounts. - Successfully cleaning up historical data for specific, complex clients as a pilot, with the expectation that subsequent client data migrations will be smoother. - Implementing a long-term, scalable solution that reduces reliance on manual band-aid fixes and constant firefighting. - Gaining clearer, more reliable reporting and visibility into processing statuses and backlogs to better manage operations and customer expectations. - Developing a controlled, efficient method to reprocess or correct batches of invoices when systemic errors are identified, without enabling uncontrolled system access.
[EXTERNAL]UBM Demo and planning
## Summary The meeting focused on several ongoing technical challenges, primarily centered around data tracking issues in payment file generation, and provided updates on current development sprints. Discussions covered immediate workarounds, future architectural solutions, and progress on automated testing infrastructure. ### Payment File Generation and Data Tracking Issues A significant data gap exists in matching generated payment files to specific bills, which complicates reconciliation and user downloads. - **Current System Limitation:** The core issue is that the system does not store the actual file names at the individual bill level in the database. Currently, data is aggregated and presented at the day level. This makes it impossible to definitively know which bills were included in a specific AP (Accounts Payable) or MUC file, especially when multiple files are generated or regenerated on the same day. - **User Experience Impact:** Because files cannot be matched precisely, the current workaround is to present users with *all* files generated for a customer on a given day when they click the download button. Users must then manually inspect these files to find the correct one, which is inefficient. - **Future Solution with Payment Refactoring:** A comprehensive fix is tied to the ongoing payment refactoring project. Once deployed, the new architecture will store the file name and batch ID at the bill level, enabling precise grouping. This will allow the system to display and offer for download a single, correct file corresponding to each batch. ### Current Development Work and Sprint Updates Team members provided brief overviews of their recent and ongoing development work across various functional areas. - **Virtual Account and Bulk Features:** Work was completed on fixing bugs related to virtual account functionality and implementing the backend processing for bulk operations on feature accounts, complementing previously built front-end download components. - **Summary Items and Refactoring:** Progress was made on refactoring summary items logic to include data based on customer preferences, such as pasture mode and pass-through amounts. This work was decoupled from a larger effort to expedite its delivery. - **Payments Area and Reporting Support:** Efforts in the payments area included fixing bugs related to the ongoing refactoring and implementing support for Excel files required for specific reports (Simon reports). These issues have been resolved. - **Upcoming AP File Logic Update:** A separate, urgent update to the AP file generation logic for a specific client (Jcode) was discussed. This update, which involves adding a new column and modifying line items, is **not dependent** on the broader payment refactoring release and can be deployed independently on its own timeline. ### Automated Testing Infrastructure Setup Foundational work has been completed to integrate automated testing into the development pipeline, establishing a base for future improvements. - **Initial Implementation:** Automated test suites have been set up to run on a scheduled basis in the demo environment. While some tests are currently failing (which was anticipated), the infrastructure for execution and reporting (including screenshots and videos for failures) is now in place. - **CI/CD Integration:** A key milestone is the integration of a trigger that allows these automated tests to run directly when a comment is posted on a Pull Request (PR).
This marks a significant first step toward continuous integration, with clear pathways for future enhancement, such as triggering tests on PR creation or merge. ### Bug Fix for User Data Retrieval A specific performance bug was identified and resolved, relating to a third-party service limitation. - **Root Cause and Solution:** The "Users" page would error when loading 100 users at once because the underlying service (Auth0) has a hard limit of retrieving 50 user records per request. The fix involved implementing pagination on the front-end query, splitting the request into two sequential calls of 50 users each to work within the external constraint.
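A small sketch of the workaround's shape. `fetch_page` stands in for the actual Auth0-backed query; the only fixed constraint taken from the discussion is the 50-record cap per request.

```python
def fetch_users(fetch_page, total=100, page_size=50):
    """Work around a per-request cap by issuing sequential page requests.
    fetch_page(offset, limit) is a stand-in for the real backend query."""
    users = []
    for offset in range(0, total, page_size):
        users.extend(fetch_page(offset, page_size))
    return users

# Demo with a fake backend that enforces the 50-record cap.
directory = [f"user{i}@example.com" for i in range(100)]

def fake_page(offset, limit):
    assert limit <= 50, "upstream service caps each request at 50 records"
    return directory[offset:offset + limit]

assert len(fetch_users(fake_page)) == 100
```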
SOC 2 Readiness — Compliance Review
SOC 2 readiness review ahead of the March 15 Type II audit. Identified access control gaps and logging coverage issues. Clear ownership assigned — Alex on infrastructure remediation, Priya on documentation. On track if remediation completes by Feb 15.
Report
## Summary The meeting centered on critical dependencies and timeline challenges for a client implementation project. Key discussions revolved around the execution of a Letter of Authorization (LOA), the confirmation of report specifications, and the cascading impact these have on the overall project go-live date. ### Letter of Authorization (LOA) Execution and Dependencies The execution and delivery of the LOA to the client is a major bottleneck, with a hard internal deadline identified. - **LOA Status and Deadline:** The LOA is ready for final sign-off but must be sent to the client by **February 5th** to trigger a 60-day setup period for EDI bills. However, a conflicting internal requirement insists that all "day one" reports must be fully tested and ready before the LOA is sent, creating a significant scheduling conflict. - **Client Readiness as a Constraint:** There is strong skepticism about sending the LOA on time because the client is internally disorganized. They have not provided complete specifications and have introduced new, previously uncommunicated requirements, making the team hesitant to start the formal EDI setup clock. ### Report Requirements and Scope Confirmation Major delays are expected due to incomplete and expanding client requirements for the custom reports needed for go-live. - **Newly Discovered Requirements:** During a recent call, the client introduced two major new requirements: the need for reports to integrate with **Anaplan** (in addition to the known JD Edwards system) and a completely new **"Vendor Details" report** that had never been formally communicated, though a related file was discovered in a shared drive from months prior. - **Scope Confirmation Timeline:** The team is far from confirming the full scope. While the AP file report is relatively close to finalization, the numerous outstanding questions and new reports mean scope confirmation is realistically **at least 30 days away**. The team lacks sufficient information to provide reliable timelines for the custom reports. ### Project Timeline and Go-Live Impact The interdependencies between the LOA, report specifications, and development work will substantially delay the project's completion. - **Dependency Chain:** The project timeline is driven by a critical path where **report scope must be confirmed** before the **LOA can be sent**, which must then be followed by the **60-day EDI setup period**. Delays in confirming scope directly push out the go-live date. - **Unrealistic Client Expectations:** The client's expectation to have all reports, including one not needed until late 2026, completed for "day one" was challenged as illogical and a primary contributor to the timeline extension. The team is aligned on pushing back on this demand. ### Strategic Adjustments and Planning The discussion focused on adapting the project plan to reflect the new reality of extended timelines and unclear requirements. - **Restructuring the Plan:** It was suggested that the "Scope Confirmed" milestone be broken down **per report** rather than being a single project milestone, accurately reflecting the piecemeal nature of receiving client requirements. - **Re-baselining Expectations:** The team acknowledged the need to provide the client with a new, later go-live date, as the current date is unachievable. All tentative dates for custom reports are considered placeholders until firm specifications are received from the client.
UBM Q1 Roadmap
Navigator platform sync to align on Q1 roadmap. Key decision to prioritize offline mode over widgets in v2.4. QA automation push targets 90% coverage. Cross-team dependency mapping created for Arcadia integration timeline.
Review Mapping Location Logic
## Summary This technical meeting focused on clarifying how line items from utility bills are processed and mapped within a billing platform's payment and accounting systems. The discussion centered on understanding the granularity required for payment files and the corresponding general ledger (GL) account mapping. ### Understanding Bill Composition and "Bill Blocks" The analysis of an example water and sewer bill revealed that individual charges are represented internally as distinct "bill blocks." A key question emerged about whether related charges for a single commodity should be consolidated or kept separate in downstream files. While a water bill showed separate blocks for water and sewer charges, an electric bill example demonstrated that a single commodity (electricity) could still be split into multiple bill blocks, such as for distribution and supply portions. ### Determining the Payment File Structure A primary goal was to resolve how these bill blocks translate into rows within the payment file sent to financial systems. Based on initial review, the indication is that each line item or bill block typically results in a separate row in the payment file, rather than rows being consolidated by commodity. This is further evidenced by the handling of ancillary charges like late fees and miscellaneous charges, which also appear as their own discrete rows. ### General Ledger Mapping Complexities The conversation highlighted that the need for separate payment file rows is often driven by accounting requirements. A single customer bill may need to be distributed across multiple General Ledger (GL) accounts (e.g., separate accounts for late fees, deposits, taxes, or different service components). This GL-driven breakdown is a critical factor determining the final structure of the payment data, as each unique GL code generally necessitates its own line item. ### Clarifying Account Codes vs. GL Codes During the discussion, a distinction was made between different code types present in the data. It was clarified that "account" codes visible in the bill file likely refer to customer or service accounts, not to GL accounts. The crucial mapping for accounting purposes is between the type of charge (e.g., "late charge," "deposit") and its corresponding GL account code, which is maintained in a separate mapping table or logic. ### Requirements for Accurate Data Mapping To ensure correct financial posting, a clear and detailed mapping logic is required. This logic must define the specific GL account code to be used for every possible type of charge or bill block (commodity, late fee, deposit, tax, etc.). Establishing this mapping is essential for automating the generation of accurate payment files that correctly reflect how payments should be allocated within the general ledger.
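As a sketch of the mapping logic this implies (GL values and charge-type names below are placeholders until the real chart-of-accounts mapping is provided):

```python
# Placeholder mapping of charge types to GL account codes; the real values
# come from the filtered chart of accounts the accounting team will provide.
CHARGE_TYPE_TO_GL = {
    "water":    "0115.501530.0000",
    "sewer":    "0115.501540.0000",
    "late_fee": "0115.509900.0000",   # hard-coded 100% to a single GL
    "deposit":  "0115.120010.0000",
}

def payment_rows(bill_blocks):
    """One payment-file row per bill block, keyed by its GL account. Blocks
    whose charge type has no mapping are surfaced for review, not dropped."""
    rows, unmapped = [], []
    for block in bill_blocks:
        gl = CHARGE_TYPE_TO_GL.get(block["charge_type"])
        if gl is None:
            unmapped.append(block)
        else:
            rows.append({"gl_account": gl, "amount": block["amount"]})
    return rows, unmapped

rows, unmapped = payment_rows([
    {"charge_type": "water", "amount": 410.25},
    {"charge_type": "late_fee", "amount": 15.00},
    {"charge_type": "misc", "amount": 12.00},   # no mapping yet -> review
])
print(rows, unmapped)
```

Surfacing unmapped charge types, rather than defaulting them, keeps gaps in the mapping visible until accounting supplies the definitive table.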
DSS All Hands
## Summary ### Introduction to the DSS Audit Queue Feature A new feature called the DSS Audit Queue has been implemented within the DSS application to address a critical gap in the automated billing process. The primary purpose is to intercept bills generated by the Large Language Model (LLM) that are missing essential information deemed necessary for processing. Instead of allowing these builds to fail completely at the output service level, they are now routed to a dedicated queue for manual review by a DS operator, ensuring problematic builds are caught and corrected before finalization. ### Filtering and Monitoring Builds in the Queue The feature introduces a dedicated "Build Validation" screen that intelligently filters and displays builds requiring attention. The system isolates builds based on a specific "DSS AutoQ" status and a "Process" build status, indicating they are pending review and have not yet been processed by the DDIS system. This provides operators with a real-time, centralized view of all builds awaiting manual intervention, complete with a counter on the application's side panel for immediate awareness of backlog volume, such as the nearly 890 builds noted during the demonstration. ### The Manual Review and Update Process Operators can review each queued build and manually correct the identified errors. The interface is designed for efficiency, automatically highlighting the specific row within a build that contains an error to direct the reviewer's attention. Corrections are made directly within this interface, and changes are saved automatically upon update. For additional context, reviewers can access the original PDF document associated with the build. Once a build is corrected and closed, it is removed from the queue and can proceed through the system. ### Common Error Examples and Resolution Workflow The demonstration highlighted several common error types that land builds in the audit queue, providing insight into typical failure points. A frequent issue is the system's failure to identify the **client ID**, which is a fundamental piece of information required for processing. Another example shown was an **invalid service item description**, where the extracted text (e.g., "Generation off peak UCP off") did not match expected values or formats. The resolution involves the operator reviewing the highlighted error, determining the correct value (such as updating a "Thermal" unit of measure to the proper term), and inputting it to allow the build to proceed. ### Integration Notes and Feedback Solicitation While the DSS Audit Queue streamlines validation within the new system, there are important limitations regarding integration with the legacy DDIS system. Any data filled out during the manual review in the DSS app **will not automatically carry over** to the legacy system; if further action is required there, it must be done manually. The feature includes a notes section for internal commentary. The development team is actively seeking feedback on the new workflow to identify any unforeseen friction points and improve the operator experience.
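The queue's filtering rule, as described, reduces to a simple predicate; a sketch follows, with field names as assumptions rather than the actual DSS schema.

```python
def audit_queue(builds):
    """Builds awaiting manual review: flagged 'DSS AutoQ' and still in
    'Process' (i.e., not yet picked up downstream)."""
    return [b for b in builds
            if b["status"] == "DSS AutoQ" and b["build_status"] == "Process"]

builds = [
    {"id": 1, "status": "DSS AutoQ", "build_status": "Process"},
    {"id": 2, "status": "Complete",  "build_status": "Done"},
]
queue = audit_queue(builds)
print(f"{len(queue)} build(s) awaiting review")  # drives the side-panel counter
```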
Review Problem Bills
## **Summary** The meeting primarily focused on diagnosing and resolving critical issues within the billing output service, specifically concerning a large backlog of bills stuck in a "ready to send" state and a problematic retry loop for failed batches. A key theme was the need to balance immediate fixes to clear the backlog with planning for a more sustainable long-term solution. Effective communication and clarification of the exact problems being observed by the operations team were also highlighted as necessary next steps. ## **Root Cause Analysis and Current Issues** The discussion centered on identifying why a significant number of bills were failing to be sent to UBM and why failed batches were causing system congestion. - **The Primary Suspected Cause**: The root of the immediate service outage for components like the directory watcher and "laser" services was traced back to an expired client secret, a problem that was recently rectified. This suggests a systemic issue with credential management. - **The Retry Loop Problem**: A critical flaw in the current system logic was confirmed: when an individual bill or batch fails during output, the system immediately and continuously retries it. This creates a processing loop that blocks the queue, as many failures require manual intervention and cannot be resolved automatically through retries. - **Investigation into "Blank" Bill Errors**: There is a separate but parallel investigation into why some bills appear with blank or zero values in the interface. Initial checks on one example bill showed correct data, indicating the issue might be a display bug or a misinterpretation of the data, rather than a data integrity problem at the source. ## **Backlog Composition and Prioritization Concerns** A significant portion of the meeting was dedicated to understanding the scale of the backlog and establishing rules for processing order to minimize business impact. - **Scale of the Backlog**: Estimates of the backlog varied, but consensus settled on approximately 1,000 bills in the "ready to send" queue for auto-output, with an additional number needing manual review. All items currently in this queue appear to have validation issues preventing automatic processing. - **Urgent Need for Prepay Bills**: A clear business priority was established: prepaid customer bills must be processed and sent to UBM ahead of historical or postpaid bills. There is a pressing backlog of 500-600 prepay bills for key customers that are currently delayed. - **Lack of Built-in Priority**: The existing output service operates on a simple first-in, first-out basis with no mechanism to prioritize prepay bills or deprioritize repeatedly failing items, which contributes to the current congestion. ## **Current System Status and Operational Impact** The team assessed the real-time state of the output service to determine immediate next actions. - **Output Service Currently Idle**: Following the credential fix, the output service has cleared its previous massive internal queue and is now in a state where it is not processing anything. This is because all bills currently marked "ready to send" are flagged with errors. - **Implication for Manual Efforts**: This status means that any bill needing correction requires a manual update by the operations team before it can be successfully processed by the automated system. The system will not proceed until these validation issues are resolved. 
- **Communication Gap on Symptoms**: There is confusion regarding the exact symptoms the operations team (Afton and Rachel) are experiencing. Reports of continuous reprocessing and missing error logs need to be clarified to confirm if they are seeing the retry loop issue or a different problem entirely. ## **Proposed Solutions and Implementation Trade-offs** Two potential solution paths were discussed, each with different timelines and levels of effort. - **Long-term Architectural Fix (Recommended)**: The favored solution is to modify the system's logic to move failed bills or batches out of the main "ready to send" queue and into a dedicated "failed output" queue. This would immediately halt the retry loop, allow for manual investigation, and prevent failed items from blocking the processing of other valid bills. - **Short-term Mitigation**: An alternative, simpler fix would be to implement a configurable delay (e.g., 24 hours) before the system retries a failed item. While faster to implement, this is seen as less robust than the dedicated queue approach. - **The Rollback Question**: The operations team has requested a rollback of the recent output service update. However, analysis indicates the update (changing from batch to individual bill processing) did not introduce the core retry logic flaw; the flaw existed in the previous version as well. A rollback is therefore viewed as unlikely to resolve the fundamental issue and may only provide a perceived change. ## **Immediate Next Steps and Coordination** The meeting concluded with a plan to gain clarity and unblock the highest-priority bills. - **Synchronize with Operations**: The immediate next action is to connect directly with the operations lead (Afton) to precisely define the issues she is observing, prioritize the list of problems, and ensure everyone is aligned on what needs to be fixed first. - **Test Manual Processing**: Now that the output service queue is clear, the operations team should test manually pushing corrected bills, particularly prepay bills, to verify that the basic send functionality is working and that they receive timely error logs. - **Focus on Prepay Backlog**: Concurrently, efforts should be focused on manually correcting and pushing the backlog of 500-600 prepaid customer bills as the highest business priority.
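A minimal sketch of the recommended state change, with `send` standing in for the real UBM output call and the state names as illustrative placeholders:

```python
from enum import Enum

class OutputState(Enum):
    READY_TO_SEND = "ready_to_send"
    SENT = "sent"
    FAILED_OUTPUT = "failed_output"   # new dedicated queue for manual investigation

def process_queue(bills, send):
    """Instead of retrying a failed bill in place (which blocks the queue),
    move it to FAILED_OUTPUT and keep going."""
    for bill in bills:
        if bill["state"] is not OutputState.READY_TO_SEND:
            continue
        try:
            send(bill)
            bill["state"] = OutputState.SENT
        except Exception as err:
            bill["state"] = OutputState.FAILED_OUTPUT
            bill["last_error"] = str(err)   # preserved for the error logs ops asked about

def flaky_send(bill):
    if bill.get("has_validation_issue"):
        raise ValueError("invalid service dates")

bills = [{"id": 1, "state": OutputState.READY_TO_SEND},
         {"id": 2, "state": OutputState.READY_TO_SEND, "has_validation_issue": True}]
process_queue(bills, flaky_send)
print([(b["id"], b["state"].value) for b in bills])
```

Because a failure now transitions the bill out of the active queue instead of re-enqueueing it, one bad bill can no longer starve the rest of the batch.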
Backlog status and updates
## Summary ### Output System Update and Immediate Backlog Crisis A recent update to the system's output functionality has led to a critical processing bottleneck, where batches are taking hours to send instead of seconds. The core problem is that the output process is continuously and unsuccessfully trying to send thousands of bills with existing data errors, which blocks the manual output of high-priority transactions. This situation has created significant operational delays, particularly for urgent bill pay customers. ### Scale of the Backlog and Its Operational Impact The current backlog in the "ready to send" queue is substantial, containing **3,067** bills. This backlog is not limited to new transactions but includes a large volume of historical, postpaid, and project makeup bills that have been failing repeatedly. The overwhelming number of failing items makes it impossible to distinguish between bills that are merely queued and those that have genuine errors requiring manual intervention, severely hampering the team's ability to prioritize and process time-sensitive work. ### Root Cause: Proliferation of Data Errors The primary driver of the system blockage is a large accumulation of unresolved data errors on individual bills. Common issues include missing invoice dates, invalid service dates, incorrect data in charge lines, and forms populated with inactive meter information. Because the output process retries these failed bills incessantly, it creates a continuous loop that consumes system resources and prevents clear bills from being processed efficiently. ### Proposed Short-Term Solution: Implementing a Delay Buffer To immediately alleviate the pressure and clear the functional transactions from the queue, a **24-hour delay buffer for retries** was proposed. This solution would instruct the system to wait a full day before re-attempting to send a bill that has failed, allowing all error-free bills in the queue to be sent first. This approach would dramatically reduce the backlog volume in the "ready to send" state, isolating the bills with genuine errors for manual review and correction by the operations team. ### Next Steps and Coordination The immediate action plan involves coordinating with the developer to assess the fastest path to resolution, whether that is implementing the proposed 24-hour buffer or temporarily rolling back the recent update. The goal is to implement a fix that decouples the processing of valid bills from the error-correction cycle, restoring the team's ability to manually output priority batches and begin systematically addressing the root cause data issues.
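A sketch of the proposed buffer as an eligibility check, assuming each bill records its last failure time (the field name is hypothetical):

```python
from datetime import datetime, timedelta, timezone

RETRY_DELAY = timedelta(hours=24)   # the configurable buffer from the proposal

def eligible_for_send(bill, now=None):
    """A bill is sent immediately unless it already failed, in which case it
    waits out the buffer. This lets clean bills drain past the error pile."""
    now = now or datetime.now(timezone.utc)
    last_failed = bill.get("last_failed_at")
    return last_failed is None or now - last_failed >= RETRY_DELAY

now = datetime.now(timezone.utc)
queue = [
    {"id": "prepay-881"},                                             # never failed: goes now
    {"id": "hist-204", "last_failed_at": now - timedelta(hours=3)},   # waits ~21 more hours
    {"id": "hist-117", "last_failed_at": now - timedelta(hours=30)},  # buffer elapsed: retried
]
print([b["id"] for b in queue if eligible_for_send(b, now)])
```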
Plan for Arcadia
## Summary The meeting focused on reviewing the status of recent system updates, troubleshooting several ongoing technical issues, and outlining immediate next steps for development and operational tasks. Discussions ranged from specific service failures to plans for architectural improvements and cross-team coordination. ### Recent System Updates and Deployment Status The team reviewed the results of a recent deployment and the current operational health of various services. The overall update was largely successful, with most services and virtual machines now running on the latest versions. A specific update was deployed for the Output Service, which included the significant removal of a dependency on the Lock Manager service. This change has important implications for future maintenance and automation efforts. ### Critical Issues and Troubleshooting Multiple high-priority technical issues requiring immediate investigation were identified and assigned. - **Directory Watcher Service Credential Error:** This service is experiencing a credential failure. Investigation revealed two secrets in the VM2 environment variables, but their origin within the codebase is currently unclear. The immediate plan is to examine the service's source code and refactor it to align with the credential management patterns used by other applications. - **Output Service Data Discrepancies:** Following its update, reports emerged of issues with specific bills. Initial log analysis suggests the problems may stem from invalid service dates, but this requires verification with specific bill IDs. A separate, potentially more complex issue involves bills apparently stuck in a system (DSS) for five days, where related data (bill line items, services) exists but the UI displays a zero or red bill count with blank active meters, necessitating deeper database investigation. - **Build Validation Problem:** A separate issue with build validation was confirmed to be a known problem with a planned fix. The resolution involves applying a cherry-picked fix from lower environments directly to production as a hotfix. ### Pipeline Automation and Architectural Refactoring Plans were set in motion to improve operational automation and untangle a legacy codebase. - **Output Service Restart Pipeline:** A simple automation pipeline to restart the Output Service will be created in Azure DevOps. With its dependency on the Lock Manager now removed, the service can be restarted independently, making this pipeline straightforward to implement. Testing will proceed with caution to ensure no other services are impacted. - **Repository Consolidation:** A longer-term architectural goal is to break out individual services from the monolithic "PDAS" repository in Azure DevOps into their own dedicated repositories. This is acknowledged as a significant challenge due to intricate, legacy inter-dependencies between libraries and applications within the current structure, described as a "spaghetti monster nightmare." ### External Service Integration and Support Progress on integrating with an external service, Arcadia, was discussed, highlighting a need for clearer external communication. - **Arcadia Service Implementation:** Development of the service for Arcadia is reportedly underway and "working one way." However, several specific questions regarding the integration's behavior remain, particularly about the conditions that trigger webhook notifications (e.g., new statement creation vs. account addition).
To resolve these, a direct thread will be initiated with the external technical support contact to seek clarification. ### Operational and Administrative Coordination The meeting also covered routine operational procedures and an administrative hurdle. - **Credential Synchronization:** The regular process of syncing credentials between DS and UBM systems occurs at least weekly, sometimes daily. To improve team-wide visibility, it was agreed that notifications for each sync will be posted in a dedicated Slack channel, moving away from direct messages to a single individual. - **Infrastructure Access:** A team member encountered a blocked access issue due to an internal IP change enforced by Zscaler, a corporate tool. The resolution path involves contacting the relevant infrastructure team (BDE) to obtain a new IP whitelist for the big data image, with management standing by to assist if delays occur.
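For the Directory Watcher refactor, one common pattern (an assumption here, since the meeting did not specify which pattern the other applications use) is to resolve secrets from a managed store such as Azure Key Vault, falling back to the VM environment variables only while the refactor lands:

```python
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://example-vault.vault.azure.net"   # placeholder vault

def get_client_secret(name: str) -> str:
    """Illustrative lookup: try the managed secret store first, then the
    legacy VM2 environment variable of the same name."""
    try:
        client = SecretClient(vault_url=VAULT_URL,
                              credential=DefaultAzureCredential())
        return client.get_secret(name).value
    except Exception:
        value = os.environ.get(name)
        if value is None:
            raise KeyError(f"secret {name!r} not found in vault or environment")
        return value
```

Centralizing the lookup also makes expiring secrets (the root cause behind the earlier output outage) a rotation task in one place rather than a per-VM hunt.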
Plan for Arcadia
## Summary The meeting focused on operational updates, primarily concerning data cleanup in a system referred to as "DSS" and significant progress with the Arcadia platform. Key topics included the cleanup efforts, managing customer credentials with Arcadia, the development of internal reporting tools, and strategic discussions around the necessary documentation for transitioning customer management. ### Progress in DSS Cleanup **Significant cleanup was completed over the weekend, making the process more manageable.** The effort involved archiving outdated or irrelevant customer data to streamline the system and prevent unnecessary processing. The cleanup process was noted to be intuitive, especially when cross-referencing with another system (UVM) to resolve uncertainties, highlighting an efficient method for maintaining data hygiene. ### Arcadia Customer Updates **Progress is being made with Arcadia, including successful credential matching for several customers and systematic tracking of outstanding issues.** A new internal reporting system is being developed to track all credential requests and filter them by customer, mirroring but improving upon the Arcadia dashboard's functionality. However, challenges remain with matching certain municipal co-ops due to ambiguous information, requiring collaborative trial and error with the Arcadia team to resolve. ### Credential Management System **A priority is to establish a clear system to manage the transfer and version control of customer credentials to Arcadia.** The goal is to avoid sending outdated credential files repeatedly by implementing a live report that automatically removes customers once matches are confirmed. A critical pending decision is selecting the initial set of credentials to officially hand over to Arcadia for management, pending further internal alignment. ### LOA Challenges and Process **The Letter of Authorization (LOA) is identified as a critical blocker for Arcadia to begin full production management of customer accounts.** There is a recognition that while onboarding data can be shared, the LOA is a prerequisite for Arcadia to proceed effectively. Historically, some customers have been hesitant to provide LOAs immediately due to concerns about premature account changes, indicating a need for careful customer communication and timing. ### Onboarding Documentation Review **A review of the customer onboarding document is necessary to determine what information can be proactively shared with Arcadia.** The team acknowledges they should assess the onboarding documentation to understand exactly what data they possess and can provide to Arcadia in the absence of a signed LOA, aiming to advance the process as much as possible with the available information.
Weekly Planning Session
Personal weekly planning session. Set gym schedule, deep work blocks, and reading goals. Low complexity but important for routine maintenance.
Weekly Family Call
Weekly family call covering health updates, upcoming March visit planning, and home renovation timeline. Personal call with minimal work relevance.
Plan for Arcadia
January all-hands for the Constellation org. Celebrated strong January metrics (47 PRs, 99.7% uptime). Announced Arcadia integration as Q1 priority. Addressed on-call burnout with rotation changes. Engineering blog launching in February.
Arcadia Migration
## Credential Management and Data Integration Challenges The discussion centered on significant operational hurdles related to credential management and data flow from Arcadia. A primary challenge involves manually managing thousands of customer credentials, many of which have issues preventing automated processing. The current strategy is a large-scale migration of all credentials to the Arcadia platform to consolidate and manage them more effectively, even though this reveals a high volume of problematic credentials that will require contractor support to resolve. A parallel effort is underway to develop a robust, internal credential management system to eliminate future manual tracking. Ensuring the quality and accuracy of data imported from Arcadia is also critical, with concerns about handling duplicate records, correct vendor and account tagging, and managing documents where key data points like statement dates are missing. - **Large-Scale Credential Migration to Arcadia:** All existing credentials are being loaded into Arcadia to centralize management, but this process has exposed that a significant portion have underlying issues that cause processing failures. - **Developing an Internal Credential Management System:** A long-term solution is to build a dedicated system for credential management to prevent reliance on manual spreadsheets and tracking, moving towards an automated, sustainable process. - **Quality Assurance for Incoming Arcadia Data:** The integration must carefully handle potential data problems, including de-duplication, ensuring vendor/account mapping is correct, and establishing rules for processing documents with incomplete information to avoid missing valid bills. ## Technical Hurdles and Solution Implementation Several specific technical blockers were identified that are impeding progress. A key issue was the use of an incorrect database table for vendor comparisons, which was creating discrepancies in data validation. The team resolved to switch to the correct `provider vendor` table. Furthermore, the implementation timeline for credential management updates has been delayed, partly due to a prior IP whitelisting requirement and the need to correct the vendor table logic, pushing the target completion to mid-next week. - **Correcting Vendor Data Source:** A critical bug was identified where the system was referencing a general `vendor` table instead of the more specific `provider vendor` table, leading to inaccurate data matching that must be corrected. - **Adjusted Timeline for Credential Updates:** Due to the vendor table issue and the team's schedule, the updates to the credential management workflow are now targeted for completion in the middle of the following week, allowing time for proper implementation and testing. ## Process Transition from DDS to DSS A major strategic shift involves moving key operational processes from the legacy DDS system to the newer DSS platform. The immediate focus is on transitioning bill editing and reprocessing functionalities. However, it's acknowledged that other critical tasks, such as initial customer and account setup ("summary builds"), will also need to be migrated. The transition is expected to reveal shortcomings in DSS's current capabilities that are not apparent until the system is under heavy operational use, necessitating further development. 
## Process Transition from DDS to DSS

A major strategic shift involves moving key operational processes from the legacy DDS system to the newer DSS platform. The immediate focus is on transitioning bill editing and reprocessing functionalities. However, it's acknowledged that other critical tasks, such as initial customer and account setup ("summary builds"), will also need to be migrated. The transition is expected to reveal shortcomings in DSS's current capabilities that are not apparent until the system is under heavy operational use, necessitating further development.

- **Prioritizing Core Function Migration:** The initial phase of the transition focuses on moving essential day-to-day tasks like editing and reprocessing bills into DSS to streamline operations and reduce dependency on the old system.
- **Anticipating System Gaps Under Load:** It is expected that moving high-volume tasks to DSS will uncover functional limitations in its user interface and editing tools, requiring subsequent development sprints to address these gaps.
- **Long-Term Roadmap for Full Transition:** Beyond immediate edits, the complete migration plan includes moving customer/vendor/account setup workflows and the final output service, which is currently a manual and brittle process within DSS.

## Onboarding Automation and Customer Self-Service Vision

The conversation highlighted a critical strategic need to overhaul the customer onboarding process and move toward greater customer self-service. The current onboarding workflow is highly manual, relying on CSMs to interpret spreadsheets and manually enter data into DDS, which creates delays and potential for error. A future-state vision was outlined where onboarding and account management tasks could be initiated directly by customers or CSMs through a unified interface (potentially within UBM), creating a structured, auditable workflow that feeds directly into operational systems, thereby drastically reducing manual intervention and miscommunication.

- **Eliminating Manual Onboarding Friction:** The goal is to replace error-prone manual data entry from spreadsheets with a structured UI that captures customer, vendor, and account setup information cleanly at the point of entry.
- **Creating a Unified Workflow Interface:** The vision is for a single system where tasks like new account setup, bill submission, and service cancellation can be initiated, creating a trackable workflow instead of relying on unstructured email chains and manual handoffs.
- **Reducing CSM Dependency for Simple Tasks:** Automating these processes is essential for scaling, as it frees CSM time from administrative tasks and prepares the platform for serving mid-market or self-serve customers who cannot rely on high-touch manual support.

## Operational Gaps and Manual Workflow Crisis

A significant operational risk was surfaced regarding the complete lack of automation for handling customer-submitted documents. Currently, customers email bills and requests to a shared inbox, which requires manual monitoring and processing by team members who are already overloaded. This has led to requests being lost or severely delayed, damaging customer trust. Automating the ingestion and routing of these email submissions is now recognized as an urgent priority to prevent failures and improve reliability.

- **Critical Breakdown in Email Processing:** Customer communications sent to a generic inbox are not being processed in a timely or reliable manner because the task is manual and low priority for overwhelmed staff, leading to lost requests and customer frustration.
- **Urgent Need for Ingestion Automation:** The immediate solution is to implement an automated system that processes incoming customer emails, extracts attachments, and routes them into the correct processing queues without manual intervention, ensuring no request falls through the cracks (a sketch of such a job follows this list).
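As a rough illustration of the proposed ingestion automation, here is a minimal polling sketch. The host, mailbox, credentials, and queue directory are all placeholders; a production version would also need error handling, deduplication, and secure credential storage.

```python
import email
import imaplib
import pathlib

# Hypothetical settings: the real shared inbox and queue location were not
# named in the meeting; adjust to the actual mailbox configuration.
IMAP_HOST = "imap.example.com"
MAILBOX_USER = "bills@example.com"
MAILBOX_PASS = "app-password"
QUEUE_DIR = pathlib.Path("/var/ingest/incoming")  # picked up by the DSS pipeline

def ingest_unread_bill_attachments() -> int:
    """Pull PDF attachments from unread messages into the processing queue."""
    saved = 0
    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(MAILBOX_USER, MAILBOX_PASS)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        for part in msg.walk():
            if part.get_content_type() == "application/pdf":
                name = part.get_filename() or f"bill-{num.decode()}.pdf"
                (QUEUE_DIR / name).write_bytes(part.get_payload(decode=True))
                saved += 1
        # Mark the message seen so the next poll does not re-ingest it.
        imap.store(num, "+FLAGS", "\\Seen")
    imap.logout()
    return saved

if __name__ == "__main__":
    print(f"queued {ingest_unread_bill_attachments()} attachment(s)")
```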
## Planning for Structured Operational Updates

To improve visibility and coordination across multiple ongoing projects (Arcadia integration, credential cleanup, DDS transition, automation work), the team agreed to establish a formal structure for weekly update meetings. The goal is to move away from ad-hoc discussions and instead provide consistent, trackable updates on each major initiative. This will help align the development team, operations staff, and management, and reduce the time spent clarifying the status of completed or irrelevant issues.

- **Establishing a Consistent Reporting Rhythm:** The team will implement a weekly update meeting with a standardized format to review progress across all active development and operational projects.
- **Improving Cross-Team Visibility:** A structured update, potentially using a shared slide or table, will help keep all stakeholders informed on priorities like Arcadia metrics, credential cleanup status, and key ticket resolutions, minimizing misalignment and repetitive questions.
Daily Meeting
## Missing Bills and Final Bill Identification

The meeting opened with a critical operational issue regarding the identification and processing of "final bills" in the invoicing system. A key problem was that when a vendor submits a final bill, it is not always being properly flagged and closed within the system, causing ongoing accruals for accounts that should be closed. The primary reasons were twofold: the system requires a specific field to be marked or a button to be clicked, and not all vendors explicitly label their invoices as "final bill."

- **Systematic identification challenges:** While utility distribution bills often state "final bill," supply-only bills (e.g., from Constellation) typically do not. This makes automated detection difficult.
- **Proposed solution with AI:** To address this, a plan was formed to train a Large Language Model (LLM) to automatically identify final bills. The model will be trained on examples, looking for both explicit text and contextual clues like unusually short service periods.
- **Immediate action:** A specific error code (3034) in the system will be used as a source for examples to train the AI model, aiming to integrate this validation into the upcoming DSS application (a sketch of the classification step appears after this summary).

## Virtual Account Cleanup Efforts

Significant time was devoted to ongoing cleanup efforts for misaligned or "virtual" accounts across different systems. A substantial backlog has developed, with about 20 customer requests pending, a number that grew during a recent industry conference.

- **Prioritized customer list:** Cleanup is proceeding on a customer-by-customer basis using a priority list. Initial work has been completed for specific clients like St. Elizabeth and Munson.
- **Manual alignment process:** The current method involves exporting data from different systems and manually aligning account numbers, which is time-consuming. A suggestion was made to improve future alignment by never editing an account in the source system once created, instead inactivating it and creating a new one for clarity.
- **Metrics reporting:** A request was made to generate and provide weekly metrics on the progress of moving and cleaning up accounts to track the effort's effectiveness.

## DSS Bill Validation Page

A major update was presented on a new "Bill Validation" page within the DSS application, which is now live and ready for use. This tool is designed to intercept and correct billing errors *before* they are forwarded for manual processing, aiming to drastically improve efficiency.

- **Addressing a large backlog:** Currently, there are approximately 1,200 bills in the validation queue that require review. This backlog is a contributing factor to some missing bills.
- **Operational shift required:** The operations team is expected to start using this new page daily from the coming week. The goal is to clear the initial backlog and then maintain it, preventing new bills from piling up.
- **Efficiency gain:** Correcting data in DSS is significantly faster than the previous process of handling errors in the downstream DDS system, where all data had to be re-entered manually. Training videos will be provided to facilitate the transition.

## Credentials and Data Alignment

The team discussed problems with outdated or missing login credentials for utility portals, which prevent automated bill retrieval.

- **Discovery of a new data source:** It was found that current credentials can be viewed within the data services system, providing a starting point for verification.
- **Systematic verification plan:** The new process will involve checking these credentials against the master records in SmartSheets and proactively reaching out to customers for updates, especially since many change requests from customers have reportedly gone unanswered for months.
- **Resource allocation:** New and existing contractors will be brought into the effort to help clean up the credential list and assist with other data cleanup tasks.

## Project Updates and Operational Changes

Several other key projects and system changes were reviewed.

- **BP Project Go-Live:** A major project for BP is scheduled to go live at the end of the following week, involving roughly 2,500 meters. A concerning issue was raised about potentially missing credentials for a portion of these accounts.
- **Account Linking Process:** For BP and similar projects, there is an ongoing effort to properly link accounts in the system without immediately triggering bill pulls. This process is intended to confirm credentials work and ensure data flows correctly into all downstream systems.
- **Issue Tracker Launch:** A new company-wide issue tracker tool is in its final stages and is expected to launch with a training video early in the coming week, followed by a renewed focus on onboarding processes.
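A minimal sketch of how the LLM classification step might look, assuming the OpenAI Python SDK (mentioned elsewhere in these notes); the model name, prompt, and the examples drawn from error code 3034 are placeholders the team would define.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """You classify utility bills. Reply with JSON: {"final_bill": true/false, "reason": "..."}.
Signals of a final bill include explicit wording ("final bill", "account closed")
and contextual clues such as an unusually short service period."""

def looks_like_final_bill(bill_text: str, service_days: int) -> dict:
    """Ask the model whether an extracted bill is a vendor's final bill."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": f"Service period: {service_days} days\n\n{bill_text}"},
        ],
    )
    return json.loads(resp.choices[0].message.content)

# Example: a supply-only bill with a 9-day service period and no explicit label.
print(looks_like_final_bill("Constellation supply charges ...", service_days=9))
```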
Infrastructure Cost Optimization
Infrastructure cost optimization review targeting the $14.2K/month AWS bill. Identified $3.2K in immediate savings from unused instances. Planned database right-sizing and spot instance implementation. EKS migration deferred to Q2.
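The notes do not say how the unused instances were identified; as one illustrative approach, the sketch below flags low-utilization EC2 instances using boto3 and a 14-day average-CPU threshold as a crude idleness proxy. Credentials, region, and the threshold are assumptions.

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def low_utilization_instances(cpu_threshold: float = 2.0, days: int = 14) -> list[str]:
    """Return running instance IDs whose daily average CPU stayed under the threshold."""
    idle = []
    now = datetime.now(timezone.utc)
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                stats = cloudwatch.get_metric_statistics(
                    Namespace="AWS/EC2",
                    MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
                    StartTime=now - timedelta(days=days),
                    EndTime=now,
                    Period=86400,
                    Statistics=["Average"],
                )
                points = stats["Datapoints"]
                if points and max(p["Average"] for p in points) < cpu_threshold:
                    idle.append(inst["InstanceId"])
    return idle

if __name__ == "__main__":
    print("candidates for right-sizing or termination:", low_utilization_instances())
```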
DSS/UBM sync
## Summary

The meeting covered two distinct but critical operational topics: a technical investigation into a data filtering discrepancy within an audit system, and a detailed discussion on aligning credential and vendor data across different platforms.

### Investigation of DSS Audit Queue Filtering Issues

A significant discrepancy was discovered when filtering records using different statuses in the DSS audit system, leading to an investigation into data integrity and UI behavior. The core issue revolved around the system displaying inconsistent record counts when filtering by "DSS Audit Queue" versus "Failed DSS Audit Queue," with a difference of approximately 56-57 records. This pointed to potential problems in how statuses are defined and managed.

- **Status List Management and Data Source:** The root cause was traced to how status lists are populated in the user interface. It was clarified that these should be dynamically pulled from a Cosmos database container, not statically defined in the code. The team was instructed to query the Cosmos records for unique values from both the `status` and `display_status` fields to build an accurate, clean list.
- **Identifying the Record Discrepancy:** The investigation revealed that the "Failed DSS Audit Queue" filter includes an additional status, "failed DS enrichment complete," which accounts for the missing records. It was emphasized that understanding these specific records is vital, as the new system's goal is to allow operators to review and update such builds efficiently.
- **Action Plan for Resolution:** The immediate action is to clean up duplicate or legacy status names in the UI's source list. The team will query the Cosmos database to fetch all unique statuses, remove outdated entries, and ensure the filter accurately reflects the live data (see the sketch after this summary).

### Credential Management and Vendor Data Alignment

The discussion focused on the complexities of managing vendor credentials across the Data Services (DS) and UBM platforms, highlighting gaps in current mapping processes. The primary objective is to ensure that billing account credentials are consistent and accurately matched between the two systems.

- **Data Structure and Account-Level Credentials:** A key clarification was that in the UBM system, credentials are stored at the *billing account level*, not the vendor level, and a single vendor can have multiple accounts. This structural difference is central to any matching logic.
- **Analyzing the Matching Logic:** Questions were raised about the logic in an existing spreadsheet used for credential matching. Confusion existed around the purpose of two specific flags (`credits present in both` and `account spread in both`), as they appeared to be identical in the sample data. The team needs to determine if account number matching is a prerequisite for credential validation.
- **Ownership and Clarification Needed:** The creator of the spreadsheet was identified as the point of contact for clarifying the intended matching logic. The team will conduct further exploration on the test server to understand how the data is currently being filtered and validated.

### Vendor Synchronization and Mapping Between Systems

A major challenge identified is the lack of a formal, real-time mapping table for vendors between the Data Services and UBM systems, creating a potential for inconsistent data.

- **Unidirectional Vendor Creation:** The process flow was clarified: UBM does not create vendors independently. Vendors are only added to UBM either through a sync API from Data Services or automatically when a new bill is imported for an unknown vendor. Consequently, UBM cannot have a vendor that doesn't exist in Data Services, but the reverse is possible if syncing is delayed.
- **Need for a Formal Mapping Table:** It was confirmed that a dedicated mapping table between TS (potentially a third system) and UBM for customer/vendor data does not currently exist and is a required piece of work. The recent fix to the vendor sync API was noted, but its deployment status and effectiveness in production remain to be fully verified.
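A minimal sketch of the distinct-status query described above, assuming the `azure-cosmos` Python SDK; the account URL, key, database, and container names are placeholders for whatever backs the DSS audit queue.

```python
from azure.cosmos import CosmosClient

# Hypothetical connection details; substitute the real Cosmos account and
# the container that holds the build records.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("dss").get_container_client("builds")

def unique_statuses(field: str) -> set:
    """Collect the distinct values of a status field across all records."""
    query = f"SELECT DISTINCT VALUE c.{field} FROM c"
    return set(container.query_items(query=query, enable_cross_partition_query=True))

# Union both fields so the UI filter list reflects live data, not a static list.
statuses = unique_statuses("status") | unique_statuses("display_status")
print(sorted(s for s in statuses if s))  # drop null/empty legacy entries
```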
Constellation bills
## Current Task Management System in FDG Connect

The meeting began with a review of the existing functionality. Currently, tasks in the system are generated based on a simple, vendor-level "cycle date" configuration. This means a task is set to reappear on a fixed schedule, such as every 30 days, regardless of the actual bill availability. The core function of a task is to notify data services operators that a PDF bill is ready for download from a vendor portal so they can push it to the data system. However, this rigid approach is often inaccurate, leading to inefficiency as operators must manually "snooze" tasks repeatedly when bills are not yet available, adding unnecessary work.

## Proposal for an Advanced Two-Tiered Configuration

To solve the limitations of the current system, a new two-tiered configuration model was proposed. This model introduces greater flexibility and accuracy by allowing configurations at two distinct levels, moving beyond the single vendor-level setting.

- **Vendor-Level Configuration:** Users can set rules that apply to all accounts associated with a specific vendor. For example, if a water utility consistently posts bills on the 15th of each month, that expected posting date can be set once for the entire vendor, streamlining task scheduling for all related accounts.
- **Account-Level Configuration:** This provides an additional layer of control. Users can define scheduling rules for individual accounts. This is crucial for handling exceptions or accounts with billing cycles that differ from their vendor's standard pattern.

## Configuration Types: Expected Posting Date vs. Cycle Days

The proposal details two primary methods for scheduling when a task should reappear after completion.

- **Expected Posting Date:** This is a fixed calendar date (e.g., the 7th or 25th of each month). It is ideal for vendors with highly predictable, date-specific billing schedules, allowing tasks to be generated precisely when bills are expected to be available.
- **Cycle Days (Offset):** This method calculates the next task date based on the previous bill's service date. A user-defined number of days (e.g., 30, 45, or up to 180 days) is added to that date to determine when the next task should appear. This offers flexibility for billing cycles that are regular but not tied to a specific calendar date each month.

## Rules of Precedence and Independence

A key principle of the new design is the clear hierarchy between configuration levels. The account-level configuration will always **take precedence** over any vendor-level setting. This ensures that specific rules for an individual account can override a more general vendor rule. Furthermore, the configuration levels are designed to be independent; a user can modify settings at the vendor level without being forced to change account-level settings, and vice-versa, providing maximum administrative flexibility. The existing basic "cycle date" setup will remain as a foundational fallback option.
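The precedence and fallback rules translate naturally into a small resolution function. The sketch below is an illustration of that logic under assumed names (`ScheduleConfig`, `next_task_date`), not the actual implementation; short-month edge cases are deliberately ignored.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class ScheduleConfig:
    """Either a fixed posting day-of-month or an offset from the last bill."""
    expected_posting_day: Optional[int] = None  # e.g. 15 -> the 15th of each month
    cycle_days: Optional[int] = None            # e.g. 30 -> last service date + 30 days

def next_task_date(last_service_date: date,
                   vendor_cfg: Optional[ScheduleConfig],
                   account_cfg: Optional[ScheduleConfig]) -> date:
    """Account-level config always wins; vendor-level is the fallback; the
    legacy 30-day cycle remains the default when neither is configured."""
    cfg = account_cfg or vendor_cfg or ScheduleConfig(cycle_days=30)
    if cfg.expected_posting_day is not None:
        # Next occurrence of the fixed calendar day (short months ignored here).
        candidate = last_service_date.replace(day=cfg.expected_posting_day)
        if candidate <= last_service_date:
            month = last_service_date.month % 12 + 1
            year = last_service_date.year + (last_service_date.month == 12)
            candidate = date(year, month, cfg.expected_posting_day)
        return candidate
    return last_service_date + timedelta(days=cfg.cycle_days)

# An account-level 45-day cycle overrides the vendor's "15th of the month" rule.
vendor = ScheduleConfig(expected_posting_day=15)
account = ScheduleConfig(cycle_days=45)
print(next_task_date(date(2025, 3, 1), vendor, account))  # 2025-04-15
```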
Operations Sync Review
## Customer

The customer is a team responsible for processing and managing utility and vendor bills for a large organization, operating within a data services framework. Their core function involves ingesting, auditing, and processing bills from multiple sources (web portals, email, mail) to ensure timely and accurate payments. They manage a complex vendor and account structure, requiring meticulous tracking of billing cycles and posting dates. Their role is critical for financial operations, focusing on efficiency, accuracy, and reducing manual workload through automation.

## Success

The most significant success achieved involves the design and planned implementation of a sophisticated, two-tiered system for managing expected bill posting dates. This solution provides unprecedented flexibility and control. At the foundational **vendor level**, a default posting schedule (either a specific calendar date or a cycle day offset) can be set, applying uniformly to all accounts under that vendor. The major advancement, however, is the ability to **override this schedule at the individual account level**. This granular control is crucial for handling exceptions and vendors with non-uniform billing cycles across their customer base. This feature directly addresses the core operational need to predict when bills will be posted to portals, thereby eliminating the need for teams to manually check hundreds of accounts daily and significantly boosting efficiency.

## Challenge

The primary and most persistent challenge is the high volume of duplicate bills, which creates a substantial manual burden and hampers operational efficiency. Duplicates originate from several overlapping sources: accounts enrolled in automated web downloads while also receiving bills by mail, multiple runs of download bots, and technical issues where tasks repopulate in the helper tool. The problem is exacerbated by flaws in the duplicate resolution system itself. The system's reliance on "header information" (invoice date, amount) being populated to correctly identify duplicates means that many legitimate duplicates appear as "unknown," forcing manual investigation behind the scenes. Furthermore, pixel-to-pixel checksum matches do not verify if the original bill was successfully processed, potentially masking underlying data issues. This convoluted process makes duplicate resolution so time-consuming that it often falls to a limited number of specialists.

## Goals

The customer's strategic goals center on enhancing automation, improving data visibility, and refining existing systems to reduce manual intervention. Key objectives include:

- **Eliminating Manual Email Processing:** Automating the ingestion of bills sent via numerous email aliases and inboxes directly into the processing system, removing the need for staff to manually download and upload PDFs.
- **Developing a Gap Reporting Tool:** Creating a dedicated report to easily identify missing bills for specific customers, vendors, and accounts across historical periods, rather than relying solely on the most recent bill data.
- **Streamlining the Pre-Audit Process:** Utilizing a new dedicated "Bill Validation" page to efficiently review and correct bills stuck in the system due to data extraction ambiguities, accelerating overall processing.
- **Optimizing Source Selection:** Strategically unenrolling accounts from automated web downloads when bills are consistently received reliably and on time via mail, to prevent duplicate sources.
- **Refining Duplicate Detection Logic:** Improving the system to more accurately flag and resolve duplicates, particularly by ensuring the resolution interface reliably displays both PDFs for comparison.
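To illustrate the two-signal duplicate logic described in the Challenge section, and why unpopulated header data produces "unknown" outcomes, here is a toy sketch; the field names and verdict labels are invented.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bill:
    pdf_bytes: bytes
    invoice_date: Optional[str]   # header info, often unpopulated after extraction
    amount: Optional[float]

def duplicate_verdict(a: Bill, b: Bill) -> str:
    """Classify a candidate pair along the lines the notes describe."""
    if hashlib.sha256(a.pdf_bytes).digest() == hashlib.sha256(b.pdf_bytes).digest():
        # Byte-identical files; note this says nothing about whether the
        # original bill was ever processed successfully downstream.
        return "duplicate (checksum match)"
    if None in (a.invoice_date, b.invoice_date, a.amount, b.amount):
        # Unpopulated header fields are what force the manual "unknown" reviews.
        return "unknown (missing header info)"
    if (a.invoice_date, a.amount) == (b.invoice_date, b.amount):
        return "duplicate (header match)"
    return "distinct"

original = Bill(b"%PDF-1.7 ...", "2025-03-01", 120.50)
mailed_copy = Bill(b"%PDF-1.7 ...", None, None)
print(duplicate_verdict(original, mailed_copy))  # duplicate (checksum match)
```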
Arcadia Discussion w/ Derek Cox
Faisal <> Sunny
## **Summary**

The meeting focused on improving and streamlining operational processes for bill processing, credential management, and system automation. Key discussions revolved around integrating new data sources, fixing foundational data issues, and establishing new workflows to reduce manual intervention and reactive operations.

### **Arcadia Bill Processing and Data Integration**

The core discussion centered on the ongoing integration with Arcadia for automated bill retrieval. While initial data pulls have been completed, the team is grappling with the lack of robust tracking and metadata from Arcadia, which complicates identifying what has been successfully processed. The immediate plan is to continue periodic, filtered pulls of Ascension data from Arcadia, accepting some risk of duplicates, while working to implement a more permanent webhook solution with Arcadia for future data. The goal is to establish a process where Arcadia consistently provides metadata, reasons for failures, and processed data, shifting the team from a reactive "catch-up" mode to a proactive management stance.

### **Operational Workflow and DSS App Enhancements**

A significant operational shift was planned to clear a backlog of bills stuck in the DSS application. Over 300 bills are currently awaiting operator review due to failed automated checks, such as missing dates. To address this, a new dedicated page within the DSS app will be created to centralize this review work. DS operations staff will be tasked with actively reviewing and correcting these bills within the DSS system itself, rather than pushing them to another system, to preserve data integrity and audit trails. Training and rollout for this new workflow are scheduled for the following week.

### **Vendor Code Mapping and Data Integrity**

A critical data integrity issue was identified regarding vendor code mismatches between the DS and UBM systems. Only 4,200 out of 10,000 DS vendor codes have corresponding matches in UBM, which is causing validation failures and requiring extensive manual fuzzy matching (a toy sketch of this kind of matching appears at the end of this summary). This gap undermines the ability to accurately process bills and match credentials. The immediate action is to verify the existence of a master mapping list and task the team with creating a definitive and accurate mapping between DS vendor codes, UBM vendor codes, and Arcadia provider IDs. Access to UBM tables has been provisioned to facilitate this work.

### **Credential Management for Arcadia**

The process for managing customer credentials sent to Arcadia was reviewed. A major challenge is the manual and time-consuming review of credential accuracy before submission, a process that is not scalable. With approximately 9,000 unique credentials in the system, the team is evaluating how to best transition credential management to Arcadia's platform, for which they are already paying. The focus is on establishing a process where errors are flagged back to the team for correction and ensuring that credential changes over time are tracked and managed systematically, moving away from one-time CSV uploads.

### **Automation of Email Invoice Processing**

The need to automate the processing of invoices received via email was highlighted as a key opportunity to eliminate a manual, reactive step. Bills are currently forwarded to a shared mailbox where they can be overlooked, creating backlogs. The proposal is to implement an automation that monitors specified email inboxes, extracts PDF attachments, and feeds them directly into the DSS processing queue, similar to how FTP files are handled. This would provide visibility into the volume of email-sourced bills and ensure they are not forgotten, requiring confirmation of access rights to the relevant email accounts.

### **Build Template System Overhaul**

The conversation detailed a major initiative to overhaul the "build template" system, which defines how bill data is interpreted. Currently, the system allows AI to create new templates when none exist, leading to inconsistencies. The new strategy is to make the build template a mandatory, dynamic instruction set for the AI. For new customers, operations staff (DS Ops) will set up accurate templates based on initial bills, possibly with AI-assisted generation for a first draft. Crucially, the system will be changed so that if no template exists, processing stops for manual assignment, ensuring data quality. Enhanced capabilities to add, edit, and audit template changes are being implemented to give DS Ops full control and visibility.

### **SOC2 Compliance Kick-off**

The kick-off call for the SOC2 compliance project was held. The process will involve collaboration with the parent company's (Constellation) team, as the compliance framework rolls up to the larger organization. Initial steps involve setting up necessary accesses and clarifying the scope, including confirming which privacy policies apply to the various platforms in use.
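As a rough illustration of the fuzzy vendor-code matching mentioned above, here is a small stdlib-only sketch. The vendor names and codes are toy stand-ins; real inputs would come from the DS vendor table, the UBM tables, and Arcadia's provider list, and results would go to human review rather than being trusted blindly.

```python
import difflib

# Toy stand-ins for the DS and UBM vendor lists.
ds_vendors = {"DS-001": "Consolidated Edison Co", "DS-002": "Acme Water Utility"}
ubm_vendors = {"UBM-77": "Con Edison", "UBM-12": "ACME Water"}

def propose_mapping(threshold=0.6):
    """Suggest a DS -> UBM vendor mapping; below-threshold rows go to manual review."""
    mapping = {}
    for ds_code, ds_name in ds_vendors.items():
        score, best = max(
            (difflib.SequenceMatcher(None, ds_name.lower(), u_name.lower()).ratio(), u_code)
            for u_code, u_name in ubm_vendors.items()
        )
        mapping[ds_code] = best if score >= threshold else None  # None -> review queue
    return mapping

print(propose_mapping())  # {'DS-001': 'UBM-77', 'DS-002': 'UBM-12'}
```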
Constellation Navigator - SOC Report Touchpoint
## **Summary**

This was a discovery meeting focused on preparing a financial software platform for SOC 2 compliance. The primary goal was to gain a comprehensive understanding of the company's product, technical infrastructure, existing security practices, and organizational context to inform the compliance roadmap.

### **Company and Product Overview**

The discussion centered on the Utility Bill Management (UBM) platform, a service offering under the Constellation Navigator umbrella. The platform provides analytics and end-to-end bill payment management for commercial clients with multiple locations, such as retail chains, labs, and gas stations. Founded around 2017-2018 and later acquired, the company has a team of approximately 30 people and primarily serves customers in the United States.

- **Service Model:** The platform operates on a dual-service model: delivering analytics on utility consumption and costs, and managing the complete payment process for customer utility bills.
- **Data Handling:** The system processes customer data (names, addresses) and financial data related to bill amounts and payment methods, but does not store sensitive payment card information (PCI data). Payment execution is handled by a third-party service.

### **Compliance Framework and Initial Steps**

The conversation established the foundational approach for the SOC 2 readiness project, noting parallels with other entities under the same corporate umbrella.

- **Policy Leverage:** It was confirmed that UBM would leverage existing policies from Constellation Navigator where possible, rather than creating entirely new ones, to maintain consistency and efficiency.
- **Documentation Repository:** A key next step is the creation of a centralized document repository (e.g., Google Drive or Confluence) owned by UBM. This will house existing policies (including those from a prior SOC 1 audit), serve as a workspace for developing SOC 2 policies, and store work papers like risk assessments and tabletop exercises before they are uploaded to the Vanta GRC platform.

### **Technical Architecture and Hosting**

The technical stack is hosted on Google Cloud Platform (GCP), with a planned migration to Azure in the future. The architecture is modern and containerized.

- **Cloud Infrastructure:** The entire application runs on GCP, utilizing Kubernetes for container orchestration, cloud functions for serverless compute, and VMs for databases. Infrastructure provisioning is primarily manual, with some Terraform used for disaster recovery setups.
- **Data Storage and Encryption:** Customer data is stored in databases within GCP. The default GCP encryption mechanisms (AES-256) are used for data at rest, with no additional custom encryption layers applied.
- **Disaster Recovery:** A disaster recovery strategy is in place, involving the ability to spin up an entirely new environment in a different US region if needed.

### **Security Practices and Controls**

The team detailed several existing security measures, highlighting areas of maturity and identifying potential gaps for the SOC 2 audit.

- **Identity and Access Management (IAM):** Employee access uses Microsoft 365 (via Constellation) with multi-factor authentication (MFA). The application itself uses Auth0 for customer authentication, with configurable password policies (complexity, history). User provisioning and deprovisioning follow a formal, ticket-based approval process.
- **Secrets Management:** Secrets for the application are managed within GCP's Secret Manager, while infrastructure secrets (e.g., database logins) are stored in 1Password.
- **Secure Development Lifecycle (SDLC):** A robust SDLC is followed, using GitHub with branch protection rules. Code requires peer review via pull requests, passes through dedicated development, QA, and staging environments, and only specific personnel can merge to the production branch. Static code security scanning is performed using Checkmarx.
- **Network and Application Security:** Network firewalls and a recently implemented Web Application Firewall (WAF) are in place. However, there is no dedicated Intrusion Detection/Prevention System (IDS/IPS), and security-specific alerting for anomalies is not currently configured.
- **Vulnerability Management:** Regular vulnerability scans are conducted: Checkmarx scans code, and Constellation performs external vulnerability scans (e.g., using Tenable) on a monthly basis, with defined SLAs for remediating findings based on severity. A penetration test for the web application is scheduled to begin imminently.

### **AI Integration and Third-Party Services**

The platform incorporates AI in specific, focused areas rather than as a generative AI chatbot.

- **AI Usage:** Artificial intelligence is used internally for a core function: ingesting utility bills and automatically extracting data from them. Oversight for this AI/data processing function falls under the data services team.
- **Key Vendors:** The service relies on several third-party providers critical to its operation, including Google Cloud Platform (hosting), a third-party payment processor (Subsidiary API), Power BI for reporting, and Auth0 for customer identity. These vendors will be subject to due diligence as part of the SOC 2 process.

### **Organizational Context and Governance**

The discussion clarified the company's structure and existing compliance certifications, which provide a strong foundation.

- **Corporate Structure:** UBM operates under the Constellation Navigator umbrella and does not have its own separate board of directors or dedicated roles like a Chief Privacy Officer; these functions are managed at the parent company level.
- **Existing Certifications:** The development team's parent company is ISO 27001 certified, and the team undergoes regular internal audits as part of this certification, indicating an existing culture of compliance.
- **Device Management:** Employees use company-issued devices (from Cognizant) with enforced security policies, including Microsoft Defender antivirus and web filtering tools. Access to corporate networks requires a VPN (GlobalProtect).
Cleanup and Validation
## **Summary**

The meeting focused on technical progress and planning for several key system integrations and feature developments. Discussions revolved around data cleanup tasks, building new microservices for third-party integrations, optimizing database architecture, and rolling out new validation interfaces. The primary goal was to ensure these components could be completed and deployed efficiently to unblock critical business processes.

### **Ongoing Data Cleanup and Service Tasks**

Progress was reviewed on essential data cleanup and the creation of a new service for document processing. Completing these tasks is critical for measuring the impact of recent development work and for enabling new automation.

- **Line Item Cleanup:** A process to clean up specific data is pending a final review of database queries before it can be run. This cleanup is a prerequisite for accurately measuring the success of recent work on build templates.
- **ARCADE Integration Service:** Development is underway for a new service that will automatically process incoming configurations. The service's role is to trigger a queue, call an external (ARCADE) endpoint with statement data, download the resulting PDF, and upload it to the DSS system (sketched after this summary). The estimated completion is within the next day or two.
- **Other Customer Data Uploads:** There was a question about whether to also process PDF uploads for other customers like Victra and Medline. The decision was to prioritize the core cleanup task first.

### **Database Optimization for Credential Validation**

Significant work has been done to restructure the database supporting a credential validation page, resolving performance issues and preparing for new functionality.

- **Data Consolidation:** All data from the DSS and UBM databases has been successfully moved to a single, separate table. This fundamental change has resolved previous pagination problems and centralized the data.
- **View Creation for Matching Logic:** Multiple database views have been created to logically separate UBM and DSS data. This structure is intended to optimize the upcoming credential matching logic.
- **Filter Count Optimization:** A view was created to store counts for each filter option (similar to a shared spreadsheet), but its performance is currently slow. Active work is being done to optimize the query speed before moving forward.

### **Vendor Mapping and Deployment Hurdles**

A missing data mapping was identified as a potential future obstacle, and a deployment blocker was addressed.

- **Vendor Mapping Gap:** For the validation page filters to work completely, a mapping table is needed to correlate vendor IDs between the DSS and UBM systems. While a similar table exists for clients, one for vendors appears to be missing or outdated. This is currently only needed for filtering, not for core validation functionality.
- **Dev Server Whitelisting:** An outstanding issue preventing whitelisting of the development server's IP address was noted. The plan is to proceed with a deployment of recent database changes first to see if the whitelisting is still required for testing.

### **Build Validation Page Development**

A new user interface page has been developed to help teams manage and validate "builds" that have failed automated checks.

- **Page Functionality:** The new "Build Validation" page displays data for builds with a status of "DS Audit Failed" or "DS Audit Queue," and which are "In Process." It allows users to view details including the page name and account number associated with each build.
- **Deployment Priority:** This page is considered a high priority for deployment to production, even if other features are not ready, as it will immediately help operators identify and address problematic builds. The deployment will require carefully cherry-picking the relevant code changes.

### **Reporting Dashboard Planning**

Initial planning began for a new reporting dashboard to provide insights into system usage and volume.

- **Dashboard Scope:** The planned page or report should display data related to vendors, accounts, and clients. A key metric to show is the monthly number of bills processed through the system, which indicates how often the ARCADE integration is being used.
- **Implementation Path:** The initial version may be created in Power BI for speed, with a longer-term plan to potentially integrate it directly into the main application for broader access.

### **Logistical Planning and Next Steps**

The meeting concluded with coordination on immediate deployment actions and schedule adjustments.

- **Immediate Deployment Actions:** The build validation page will be deployed to a test environment first. Coordinating a production deployment will involve cherry-picking specific changes from the current development branch.
- **Schedule Note:** An upcoming public holiday was noted, leading to the cancellation of the next scheduled meeting.
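A minimal sketch of the integration service's unit of work described above (queue message in, statement PDF fetched, upload to DSS). Every name here is a placeholder: the actual queue shape, the ARCADE endpoint, the auth scheme, and the DSS upload API were not specified in the meeting.

```python
import requests

STATEMENT_URL = "https://arcade.example.com/statements/{id}/pdf"  # placeholder
DSS_UPLOAD_URL = "https://dss.example.internal/api/bills"         # placeholder

def handle_queue_message(message: dict) -> None:
    """One unit of work: fetch a statement PDF and hand it to DSS."""
    statement_id = message["statement_id"]

    # Call the external endpoint with the statement reference; download the PDF.
    pdf = requests.get(
        STATEMENT_URL.format(id=statement_id),
        headers={"Authorization": "Bearer <token>"},
        timeout=30,
    )
    pdf.raise_for_status()

    # Upload into DSS so the bill enters the normal validation pipeline,
    # tagged with its source for later reporting.
    upload = requests.post(
        DSS_UPLOAD_URL,
        files={"file": (f"{statement_id}.pdf", pdf.content, "application/pdf")},
        data={"source": "arcade"},
        timeout=30,
    )
    upload.raise_for_status()

if __name__ == "__main__":
    handle_queue_message({"statement_id": "stmt-123"})
```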
Report
## Summary

This meeting focused on a decision regarding the user interface presentation for organizing different types of data or content within a system. The primary goal was to evaluate options that balance user convenience with long-term technical performance and sustainability.

### Interface Organization Options

The central topic was determining the best method to segregate or view data currently shared in a common channel, such as Slack, for different purposes. Three main interface solutions were considered: creating a new tab, implementing a filter on the existing page, or developing a completely new report. The discussion acknowledged that while a new, dedicated tab might be the most straightforward and user-friendly option, its feasibility depended entirely on the technical effort and potential system impact.

### Performance as the Deciding Factor

A clear and critical constraint emerged: the chosen solution must not adversely affect the system's performance. The sustainability of the implementation was prioritized over the specific type of interface change.

- **Primary concern about new tabs:** It was explicitly questioned whether creating a new tab would create a negative "elastic performance impact," referring to the potential strain on database search and retrieval functions.
- **Acceptable alternative - Filters:** Should performance be a concern, adding a filter to the existing page was deemed a perfectly acceptable and functional alternative.
- **Flexibility on implementation:** The ultimate decision was framed as flexible, with no strong preference for a tab, filter, or separate report, as long as the performance criterion was met. The core requirement was to ensure any new feature is sustainable and does not degrade the user experience through slow load times or system lag.
Constellation bills
## Summary

The meeting focused on presenting and exploring potential visual interface designs for monitoring billing data at a port, specifically revolving around two key conceptual views for tracking received utility bills.

### Overview of a Potential Port Interface

The discussion centered on conceptualizing a specialized dashboard or view for monitoring billing information at a port facility.

### Conceptual Design: Monthly or Daily Billing View

One proposed design is a calendar-based view for tracking when specific bills are received. The primary concept is a **monthly view**, though a daily breakdown was also shown as a more complex example. This view would visually group or categorize different bills, providing an at-a-glance understanding of billing activity over time. The exact implementation details of the grouping were acknowledged as less critical at this exploratory stage.

### Conceptual Design: Account-Level Running List

An alternative or complementary design presented is a **running list interface** organized by account number. This view would serve as a direct lookup tool to answer specific questions about bill receipt for any given account. Its core function would be to clearly indicate the status of a bill from a particular provider (e.g., Arcadia) and, if received, display the date of the most recent bill. This concept is seen as a potential new presentation format for information currently found in existing reports, specifically the "combined lab late alignment bills report."

### Purpose and Next Steps

The presentation of these concepts was intended to initiate a collaborative review and brainstorming session. The goal is to evaluate these interface ideas, gather initial reactions, and determine if they warrant further exploration and development to improve data visibility and monitoring workflows for port billing information.
Task Update
The provided transcript fragment contains only a brief, introductory statement. There is no substantive meeting content to summarize, as no discussions, decisions, presentations, or topics were recorded beyond an initial greeting.
Arcadia Integration Plan
## Summary

The meeting centered on the ongoing integration of Arcadia as a new third-party data provider for automated utility bill retrieval. The discussion outlined the current state of the pilot, identified key operational challenges, and defined the necessary processes and system adaptations to ensure a successful, scalable implementation. The primary focus was on establishing clear ownership of data, building robust tracking mechanisms, and ensuring seamless coordination between internal systems and the external Arcadia platform.

### Understanding Arcadia's Role and Scope

**Arcadia functions as underlying data infrastructure, not a full energy management solution.** The platform is designed to retrieve PDFs and JSON data from utilities using provided credentials, but it lacks the business logic and context (like customer or vendor mapping) that internal systems possess.

- **Infrastructure, Not Management:** Arcadia's role is strictly to act on credential and account number pairs provided via API. It does not understand the broader customer or portfolio context, meaning all meaning and data relationships must be managed internally.
- **Credential Dependency:** Success is entirely dependent on the quality and accuracy of the credentials supplied. Arcadia will return errors for invalid logins, but diagnosing and fixing the root cause (e.g., which internal system is wrong) is an internal responsibility.
- **Historical Data Gap:** The system is designed for forward-looking, periodic bill retrieval. It is not intended to intelligently identify or backfill historical data gaps, which remains an internal task.

### Credential Management and System Synchronization

**A major immediate challenge is reconciling and cleaning credential data across multiple internal systems before passing it to Arcadia.** The goal is to establish a single, reliable source of truth for credentials.

- **Current Fragmented State:** Credentials currently exist across UBM, Data Services (DS), and Smartsheets, often with mismatches. A new, centralized "credential helper" application is being developed to identify discrepancies (e.g., different usernames/passwords for the same account across systems); a toy sketch of this kind of cross-system check appears after this summary.
- **Manual Cleanup Phase:** Initially, mismatches will be exported and assigned (e.g., to contractors) for manual investigation and correction. The corrected credential will then be used for the Arcadia API call.
- **Long-term System Sync:** A bi-directional sync between UBM and DS is on the roadmap to eliminate multiple sources of truth. A decision is needed on sunsetting or formally integrating Smartsheets to prevent future data drift.
- **New Account Onboarding:** The contract includes Arcadia's help in setting up credentials for new utility accounts. A process must be defined for how these newly created credentials flow back into internal systems to avoid creating a fourth, separate credential silo.

### Tracking, Monitoring, and Accountability

**Building internal dashboards and logs is critical to monitor performance, identify failures, and hold Arcadia accountable to SLAs.** Relying solely on Arcadia's portal is insufficient for operational management.

- **Beyond the Arcadia Dashboard:** An internal tracker is being built to augment Arcadia's data with internal context (customer, vendor, expected bill dates). This allows for filtering, prioritization, and a clearer view of what percentage of a customer's portfolio is covered.
- **Status and Error Mapping:** The integration must categorize why a bill wasn't retrieved: whether it's an internal credential error, an Arcadia-process error, or because the vendor is not supported by Arcadia. Each category requires a different recourse action.
- **SLA Enforcement:** To hold Arcadia to its performance SLAs (e.g., 95% of bills within 5 days), internal tracking must definitively show that correct credentials were provided on time and that the delay was on Arcadia's side. Weekly reviews of items aging past 10 days will be necessary.
- **Source Attribution:** Bills retrieved via Arcadia will be tagged with their source within internal systems (like DSS and UBM). This is crucial for reporting, understanding coverage, and troubleshooting, though care must be taken to avoid exposing internal sourcing details to customers.

### Data Handling and Ingestion Workflow

**The current workflow involves manual steps from API call to final bill ingestion, which will be automated while maintaining existing data quality controls.**

- **Manual Pilot Phase:** Currently, PDFs retrieved by Arcadia are downloaded to a local directory and manually fed into the existing DSS ingestion pipeline. This allows the team to monitor for issues like duplicates or performance problems before full automation.
- **Leveraging Existing Pipelines:** The long-term plan is for Arcadia-sourced bills to flow automatically into DSS, undergoing the same validation, extraction, and auditing processes as bills from other sources (like BDE or web downloads). No unique, post-ingestion controls for Arcadia data are currently planned.
- **Unresolved Ingestion Challenges:** A known issue across all data sources, including Arcadia, is bills that arrive without recognizable account information ("missing client ID"). Solving this likely requires more advanced technical solutions and remains a longer-term challenge.

### Coverage Analysis and Operational Planning

**Clear visibility into which utilities and accounts Arcadia can and cannot service is essential for day-to-day operations and resource planning.**

- **Vendor Coverage Mapping:** A critical ongoing task is matching Arcadia's supported vendor list against the internal vendor master list. Any mismatch identifies accounts that will *not* be covered by Arcadia and must be fetched through traditional, manual methods.
- **Account-Level Visibility:** Coverage needs to be visible at the account level, not just the vendor level, to catch edge cases (e.g., an account missing credentials). This ensures operational teams are not caught off guard by unexpected gaps.
- **Influencing Arcadia's Roadmap:** By analyzing uncovered accounts, the company can request that Arcadia prioritize adding support for specific high-volume utilities, providing a path to increasing automated coverage over time.
- **Pilot and Duplication:** During the pilot phase, tasks for bills potentially covered by Arcadia will be temporarily snoozed in the internal system to avoid duplicate work. The team acknowledged the need to be vigilant to ensure this does not cause bills to fall through the cracks during the transition.
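A toy sketch of the cross-system discrepancy check the "credential helper" is meant to perform. The records, keys, and credential shape are invented; real rows would come from UBM, DS, and Smartsheets exports.

```python
# Toy records keyed by (vendor, account); credential = (username, password) here.
ubm = {("acme", "123"): ("user1", "pw-a")}
ds = {("acme", "123"): ("user1", "pw-b"), ("acme", "456"): ("user2", "pw-c")}
sheets = {("acme", "123"): ("user1", "pw-a")}

def credential_discrepancies():
    """Accounts whose credentials disagree, or are missing, across the systems."""
    systems = {"ubm": ubm, "ds": ds, "smartsheets": sheets}
    all_keys = set().union(*(s.keys() for s in systems.values()))
    rows = []
    for key in sorted(all_keys):
        values = {name: s.get(key) for name, s in systems.items()}
        present = {v for v in values.values() if v is not None}
        if len(present) > 1 or None in values.values():
            rows.append({"account": key, **values})
    return rows

for row in credential_discrepancies():
    print(row)  # export rows like these for contractor investigation
```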
Report
## Summary

The meeting primarily focused on updates regarding current work progress, technical issues, and operational planning. Discussions centered on ongoing development tasks, particularly the assembly of unit tests and the resolution of a significant file reversion issue. There was also coordination around an upcoming system security update to avoid conflicts with other deployments. A notable point of investigation was raised concerning dashboard metrics showing unexpectedly high queue depths for PDF processing, which appear inconsistent with actual system throughput and will be examined further.

## Wins

- Significant progress has been made in assembling unit tests, with most of the core coding already completed.
- Help was provided to a colleague regarding a separate system task, demonstrating collaborative problem-solving.
- Active work has begun on improving a specific report, with an analysis indicating its current inefficiencies stem from multiple contributors over time.

## Issues

- A critical setback occurred when a key file reverted to a state from several days prior, requiring a full day of work to rectify.
- Progress on the unit tests has been temporarily halted by a combination of a specific bug and project configuration challenges related to separating core functionality into a class library.
- A major discrepancy was identified in the queue monitoring dashboard: the metrics for "found" items (PDFs) show a spike to 46,000 and a total near 148,000, which is orders of magnitude higher than the actual number of bills processed (~1,600) and does not align with observed system behavior. The query logic and data source for these metrics require immediate validation (a toy illustration of one possible cause follows this summary).
- The team is awaiting crucial advice from a third party (Microsoft) to proceed effectively with the report improvement work.

## Commitments

- **Test Development & Deployment:** The unit tests will be completed and pushed. The goal is to finish by the end of Thursday, with a firm commitment not to deploy on a Friday.
- **Dashboard Metrics Investigation:** Ownership was taken to investigate the anomalous queue depth metrics shown on the dashboard by examining the underlying log queries and dashboard configuration.
- **System Update Coordination:** Plans were made to schedule necessary Windows security updates for early next week (likely Monday) to ensure they don't interfere with other planned production deployments. A confirmation will be sought on Friday.
- **Service Work:** Following the completion of a cleanup task, work will commence on a specific service, with an update to be provided on the chosen focus area.
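Purely illustrative and not from the meeting: one common cause of this kind of metric inflation is aggregating raw log events rather than distinct items, for example a retried PDF emitting a "found" event on every poll. A toy sketch of the distinction, with hypothetical log records:

```python
# Hypothetical queue log: one PDF can emit a "found" event on every poll or retry.
events = [
    {"event": "found", "pdf_id": "a"},
    {"event": "found", "pdf_id": "a"},
    {"event": "found", "pdf_id": "a"},
    {"event": "found", "pdf_id": "b"},
]

event_count = sum(1 for e in events if e["event"] == "found")                # naive dashboard count
distinct_pdfs = len({e["pdf_id"] for e in events if e["event"] == "found"})  # actual items

print(event_count, distinct_pdfs)  # 4 vs 2: the same shape as 148,000 events vs ~1,600 bills
```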
Ascension - Linking and Pairing - new VAA and missing GL
## Summary

The meeting began with a personal discussion about managing health issues like morning acidity, stress, and preventative herbal remedies. However, the core of the discussion focused on clarifying and resolving critical data integrity issues within a platform's account management system, specifically the concepts of **linking** and **pairing**.

### Health and Lifestyle Challenges

A brief, initial conversation touched upon common modern health concerns, drawing connections between diet, stress, and hereditary factors.

- **Acidity and Modern Diets:** Morning throat issues were linked to acidity, exacerbated by modern diets and fasting practices during events like Ramadan. Preventative measures, including specific herbal remedies and dietary awareness, were discussed as alternatives to solely treating symptoms.
- **The Impact of Chronic Stress:** It was acknowledged that contemporary life involves significantly more stress and constant information consumption than previous generations, contributing to earlier onset of age-related health issues. However, practical solutions to mitigate this systemic stress were recognized as challenging to implement.

### Clarifying Account Linking

A significant portion of the meeting was dedicated to demystifying the process of **linking** virtual accounts, a core function for maintaining historical data continuity.

- **Purpose and Process:** Linking is used to consolidate historical data when the underlying components of a virtual account change. It involves connecting an older, closed account to a newer, active one under a single umbrella virtual account to preserve the complete history of a utility service, like a water bill.
- **Critical Nuances and Challenges:** The process requires manual selection of the active account, and incorrect linking can lead to operational issues like wrongfully closed accounts. A key assumption is that account changes are permanent and won't frequently revert, an assumption that sometimes breaks down in practice.

### Understanding Account Pairing

The discussion then pivoted to the distinct but equally important concept of **pairing**, which is specific to certain utility commodities to prevent data duplication.

- **Purpose and Scope:** Pairing is exclusively for natural gas, electric, and lighting commodities in deregulated markets where supply and distribution are separate. Its sole purpose is to prevent double-counting of usage data by logically connecting a supply account with its corresponding distribution account.
- **System Fragility and Data Integrity:** The platform was originally built on the assumption that both supply and distribution bills would always be present. This assumption has proven fragile, leading to inconsistent data labeling and unpaired accounts. Proper pairing is essential for generating accurate usage reports, yet it remains an underutilized and poorly understood feature outside of specific, manually cleaned customer datasets.

### Systemic Issues and Root Causes

The conversation highlighted that the technical concepts of linking and pairing are hampered by broader, systemic upstream problems.

- **Inconsistent Data Entry:** A major root cause of linking errors is inconsistent data entry in upstream systems, such as minor variations in meter ID formatting (e.g., extra spaces). This lack of standardization leads to the unnecessary creation of duplicate virtual accounts (see the normalization sketch after this summary).
- **Lack of Process Awareness and Consistency:** There is a significant knowledge gap among operators regarding fundamental industry concepts like the difference between "full service," "supply," and "distribution" accounts. This results in incorrectly labeled data from the point of entry, which corrupts all downstream processes, including pairing, and erodes trust in the platform's data.

### The Path Forward and Current Impact

The meeting concluded by connecting these issues to real business problems and outlining the necessary focus areas for resolution.

- **Immediate Cleanup Needs:** Manual cleanup activities for both linking and pairing are currently required to resolve data discrepancies, especially for newer customers who are experiencing the same historical issues.
- **Long-Term Solution:** To prevent recurring problems, there is a critical need to establish consistent data entry standards upstream and to improve operator training on core platform concepts. The ultimate goal is to move from reactive cleanups to a systematic, reliable data onboarding process.
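A minimal sketch of the kind of upstream normalization that would prevent formatting noise from spawning duplicate virtual accounts. The separator set and the leading-zero rule are assumptions; some utilities treat leading zeros as significant, in which case that line should be dropped.

```python
import re

def normalize_meter_id(raw: str) -> str:
    """Canonicalize a meter ID so formatting noise cannot spawn duplicate accounts."""
    # Collapse whitespace and common separator variants, then uppercase.
    cleaned = re.sub(r"[\s\-_/]+", "", raw).upper()
    # Assumption: leading zeros are not significant; remove this line if they are.
    return cleaned.lstrip("0") or "0"

variants = ["12-3456 A", " 123456a ", "0123456A"]
print({normalize_meter_id(v) for v in variants})  # {'123456A'}: one canonical ID
```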
Missing Bills Report
## Summary

The meeting focused on a critical data reporting issue related to tracking unpaid or "missing" bills within the company's systems. The core problem identified was that existing reports are polluted by internally created "mock" bills, making it difficult to accurately monitor the status of real customer invoices and identify breakdowns in the billing workflow.

### The Core Reporting Problem: Separating Real from Mock Bills

The primary discussion centered on the need for a new or modified report that can accurately display late or missing bills by filtering out any bill marked as a "mock." The current "bills by account" report, used for payment purposes, correctly excludes mocks. However, for operational tracking, a separate view is required because mock bills create significant noise. This noise obscures visibility into whether a real bill was ever successfully acquired and processed in the first place, leading to potential gaps in the payment cycle.

### Understanding the Data Flow and the Point of Failure

To understand the necessity of the new report, the complete billing data flow was outlined. The process begins in the data services system, where an actual bill (e.g., a PDF) must be obtained and processed. This processed bill is then sent to the Unified Billing Management (UBM) system. Finally, UBM forwards the bill for payment. The critical failure point occurs at the very first step in data services: if the actual bill is never retrieved or processed, it never enters the downstream workflow. However, teams sometimes create mock bills directly within UBM as placeholders or for other reasons, which masks this initial failure.

### Requirements for the Missing Bills Report

The requested report must address the specific shortcomings of the current tools. The key requirement is the **exclusion of all mock or special bills** to present a clean view of only genuine, outstanding customer invoices. This report could be implemented as a new, dedicated view or as an additional tab within an existing reporting interface. The ultimate goal is to provide a reliable tool for tracking the completeness of the billing pipeline, specifically to answer the operational question: "Was this actual bill ever downloaded from the source and pushed to UBM?"
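The report's core rule is simple enough to sketch. The field names (`is_mock`, `pushed_to_ubm`) are invented stand-ins for whatever flags the real schema carries:

```python
from dataclasses import dataclass

@dataclass
class BillRecord:
    account: str
    is_mock: bool          # internally created placeholder
    pushed_to_ubm: bool    # did a real bill make it downstream?

bills = [
    BillRecord("A-1", is_mock=False, pushed_to_ubm=True),
    BillRecord("A-2", is_mock=True,  pushed_to_ubm=True),   # noise: a mock can mask a gap
    BillRecord("A-3", is_mock=False, pushed_to_ubm=False),  # genuinely missing
]

# Drop every mock, then surface real bills that never reached UBM - i.e.
# "was this actual bill ever downloaded from the source and pushed?"
missing = [b.account for b in bills if not b.is_mock and not b.pushed_to_ubm]
print(missing)  # ['A-3']
```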
Review URA Cleanup
## Customer

The customer is a client utilizing the platform for utility bill management and data analysis across multiple locations, such as hospitals and university facilities. Their background involves a significant volume of utility invoices, requiring accurate processing, validation, and reporting to gain insights into energy usage and invoice health. They are deeply invested in using the platform's analytics to drive operational decisions and report to their own stakeholders.

## Success

The customer's most significant success with the platform is its **bill health reporting feature**, which they lead with in discussions with their own customers and use as a core analytical tool. This feature acts as a comprehensive pulse check for their entire invoice processing operation. It provides a high-level, visual "heat map" that quickly identifies data completeness and potential issues, such as missing invoices or linking problems, enabling them to proactively manage their utility accounts and demonstrate value.

## Challenge

The customer's primary and ongoing challenge stems from **data inconsistencies originating from a third-party data service (DSS)**. The core issue is that utility data is delivered in inconsistent formats, leading to the erroneous creation of duplicate "virtual accounts" within the platform. This results in a cascading problem: the critical bill health reports display false negatives (showing red/errors where data may actually exist but is mislinked), manual cleanup efforts are perpetual, and the customer lacks a single, reliable source of truth for their utility data. This undermines confidence in the platform's core reporting functionality.

## Goals

The customer's primary objectives center on achieving data reliability and process efficiency.

- **Achieve a clean, accurate bill health report:** They need the platform's flagship reporting tool to reflect a true and accurate state of their data, eliminating false errors caused by systemic linking issues.
- **Implement a permanent, systemic solution to data inconsistency:** They seek to move beyond endless manual corrections. The deployment of a standardized "bill template" is seen as the key solution to enforce consistent data formatting at the point of entry.
- **Complete historical data cleanup:** While systemic fixes are pursued, there is a strong desire to rectify all past invoices to ensure the entire historical dataset is accurate and trustworthy.
- **Streamline communication and onboarding:** There is a recognized need for a more unified communication strategy from the service provider and a desire to establish clear, successful onboarding templates and processes for any future locations or customers they bring onto the platform.
Arcadia Ingestion Plan
## **Summary** ### **Arcadia Implementation Status and Initial Results** The meeting centered on the ongoing implementation and initial production run of the Arcadia system for automated utility bill retrieval. The primary goal is to transition from manual web downloads to using Arcadia's API, starting with the Ascension customer. - **Initial Ingestion is Complete:** Approximately 1,600 bills for Ascension have been ingested and are processing through the pipeline. The team managed the OpenAI rate limits by staggering the submissions, expecting to complete the initial batch within a 24-hour window. - **Processing is Underway:** Many of the ingested bills have already been processed through the data system (DSS), with some waiting to be pushed to the final destination (UBM). A number of bills were flagged as potential duplicates and are awaiting manual review to confirm and link them correctly. - **Focus on a Controlled Start:** The discussion emphasized a cautious, controlled approach with the first customer (Ascension) to identify and understand system flaws before scaling to others, aiming to avoid repeating past issues experienced with other automated processes. ### **Critical Challenges in Bill Tracking and Deduplication** A significant portion of the conversation dealt with fundamental gaps in the current Arcadia workflow, particularly around tracking and ensuring data completeness. - **Defining and Identifying "Latest" Bills is Difficult:** A major hurdle is the lack of a reliable method to determine which bill is the most recent for an account. Arcadia's data lacks a consistent "statement date," forcing reliance on "due date" or "service period," which can be flaky and may pull in bills from previous months (e.g., December). - **Risk of Processing Old Bills and Missing New Ones:** If the system processes an old bill, it may incorrectly assume the account is up-to-date and snooze the download task for 30 days, potentially causing new bills to be missed entirely until the next cycle. - **Duplicate Management is a Manual Burden:** While some duplicates are caught, the process requires manual verification. The team is concerned about duplicates passing through to UBM and the lack of a scalable, automated resolution process, which creates ongoing operational overhead. - **No Clear Cut-off for Historical Data:** The team had to arbitrarily define a cut-off date (January 1st) for the initial historical pull, acknowledging this will capture some older bills and lacks precision. ### **Credential Management and Data Quality Hurdles** The discussion highlighted that the success of Arcadia is predicated on accurate underlying data, which currently has significant quality issues. - **Incorrect Credentials Block Automation:** A large number of account credentials stored in the core database are incorrect or outdated. Arcadia cannot process these, so they remain stuck, requiring manual intervention to research and correct the credentials, a task that currently lacks a dedicated owner or streamlined process. - **The "Last Mile" is Still Manual:** Even with Arcadia automating the PDF retrieval, several preparatory and cleanup steps are completely manual. This creates a bottleneck and means the promised efficiency gains are not yet fully realized. - **Data Issues are Intertwined:** Problems with credential management and bill tracking are deeply connected, making it difficult to isolate "Arcadia-specific" issues from general data quality problems that the team already struggles with.
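The "latest bill" problem above is essentially a fallback-date ordering. A minimal sketch under assumed field names (`statement_date`, `due_date`, `service_end`); the real Arcadia payload will differ.

```python
from datetime import date
from typing import Optional

def effective_date(bill: dict) -> Optional[date]:
    """Best-available date for a bill: statement, then due, then service end.
    Field names are hypothetical stand-ins for the Arcadia payload."""
    for field in ("statement_date", "due_date", "service_end"):
        value = bill.get(field)
        if value:
            return date.fromisoformat(value)
    return None  # undated bills fall out to manual review

def latest_bill(bills: list[dict]) -> Optional[dict]:
    """Pick the newest bill for an account, ignoring undated ones."""
    dated = [b for b in bills if effective_date(b) is not None]
    return max(dated, key=effective_date, default=None)
```

Because due dates and service periods can lag the true statement date, a bill selected this way may still belong to a prior month, which is exactly why snoozing a download task for 30 days on the strength of it is risky.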
### **The Need for Enhanced Visibility and Reporting** A key conclusion was the urgent requirement to build internal monitoring and reporting tools that sit outside of Arcadia's own dashboard to gain true operational control. - **Arcadia's Dashboard is Insufficient:** Its interface does not show customer context, expected vs. received bills, or lateness. The team needs a unified view to track what was sent to Arcadia, what was returned, and what failed (and why). - **Proposed "Arcadia Status" Dashboard:** The idea is to create a dedicated view that tracks, for each customer/account: credentials sent, Arcadia processing success/failure, the resulting bill IDs, and downstream processing status. This would clearly separate Arcadia-stage issues from other data problems. - **Visibility Precedes Full Automation:** The team agreed that building this visibility is a prerequisite for having the confidence to onboard more customers, even before a fully automated "service" is built. It is necessary to measure progress and identify specific, scoped problems. ### **Strategic Planning for Future Onboarding and System Evolution** The conversation looked ahead to the steps required to move beyond the pilot phase with Ascension and build a sustainable system. - **Pausing Further Customer Onboarding:** A decision was made to not immediately onboard the next customers (Park Ohio, BNB, Medline). The priority is to first analyze the complete results from the Ascension run, build the internal reporting dashboard, and understand the failure modes. - **Testing Webhooks for Future Automation:** Once credentials are verified, the plan is to test Arcadia's webhook feature on a few accounts (a minimal receiver sketch follows this summary). This would allow bills to be pushed automatically as they become available, moving away from manual polling and closer to a real-time service. - **Long-term Vision for a Scheduled Service:** The ultimate goal is to develop an internal service that automatically calls Arcadia's APIs on a schedule to fetch the latest bills, replacing all manual scripts and processes. - **Addressing Ancillary Issues:** It was noted that Arcadia's JSON output contains less data than the internal system's output, meaning for some bills, both the PDF and the JSON might need to be ingested in the future, an optimization problem to be tackled later. - **Brief Update on URA Linking:** The meeting ended with a brief update on separate URA linking issues reported by sales. This is a known, manual process for the data services team to resolve and is not currently affecting financial reporting, but it remains a point of external friction.
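For the webhook test, the receiving side can start as a very small endpoint that acknowledges quickly and records each notification for a worker to pick up. A standard-library sketch; the payload fields and port are assumptions, and Arcadia's actual webhook contract (headers, signature verification) would replace them.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Minimal receiver: log the notification, acknowledge, process later."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # Persist before acknowledging so a crash never loses a notification.
        print("bill available:", event.get("account_id"), event.get("bill_id"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```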
January Status Review
## **Summary** This meeting served as a status check on the deliverables scheduled for January and a preliminary look at the plans for February and March. The primary focus was on the progress of several key reports, resolving underlying data discrepancies, and adjusting timelines based on current priorities and newly emerged, significant work. ### **Progress on January Deliverables** The discussion centered on confirming the status of several items due for completion in January, with mixed progress noted. - **Uncashed Check & Data Validation Reports:** Active work is underway to reconcile data discrepancies. The immediate plan involves comparing a new uncashed check report from the pay system against existing Power BI data to identify and call out mismatches, a task expected to be completed within the month. - **System Status & Review Rules:** The timeline for the system status report is under review and may be deferred to February due to shifting priorities from leadership, despite technical feasibility. The review of rules has seen minimal activity in January but is still considered achievable for February completion. - **Data Validation Errors:** The need for this specific deliverable is expected to become obsolete. A new invoice/bill template currently in testing is projected to eliminate the underlying issues that necessitated this work, effectively resolving the task. ### **Report Format and Finalization** A clarification was sought on the preferred platform for delivering the finalized reports, leading to a decision on the workflow. - The team confirmed that the immediate priority is validating the correctness of the report logic and data, not the delivery platform (Appsmith vs. Power BI). The validation and comparison will be conducted within Appsmith first. Once the report is verified and signed off, it can be migrated to Power BI for broader accessibility, contingent on resolving any license access issues for the development team. ### **Planning for February and March** The roadmap for the upcoming months was reviewed, with adjustments made based on the January progress and new information. - **February Focus:** Key items include continuing the work from January, developing a holistic UBM account view, gaining clarity on MFA downloads, and addressing specific customer issues. - **Scope Reduction for March:** Work planned for March on integrity checks and the interpreter layer is still expected but will likely involve a greatly reduced scope. This reduction is due to other solutions in progress that will address part of the original need, making the remaining work more manageable within the timeline. ### **Introduction of Unplanned Major Work: Arcadia Integration** A significant, unplanned effort was highlighted as a major current focus, impacting the team's capacity. - A new, high-impact project to integrate with a third-party solution, Arcadia, began in mid-January. This integration is a strategic shift to address challenges in downloading bills on time, moving the process from an in-house operation to a third-party service. This effort is consuming substantial resources and is expected to continue through February, which has prompted its formal recognition as a separate line item in planning due to its scope and executive interest.
Daily Progress Meeting
## Summary The meeting centered on critical project timeline adjustments, unresolved requirements blocking progress, and the need to formally reset expectations with an external team. The core issue involves delaying a Go Live date due to missing information, while parallel workstreams for account setup and legal documentation continue. ### Project Timeline and Go Live Date Reassessment A major decision was made to formally push back the project's Go Live date due to unresolved requirements from another team. The original date of March 31st is now considered unachievable because essential information needed for development has not been provided, despite prior warnings. - **Formal Date Change Required:** It was agreed that continuing without a new date creates an unrealistic expectation to hit the original deadline. The team concluded that providing a new, caveated Go Live date is necessary to draw a line in the sand and manage stakeholder expectations. - **Proposed Timeline Adjustment Process:** To protect the development team, a new task for "Scope Confirmation" will be inserted into the timeline. This allows for a dedicated period (suggested as one week) to assess any finally received requirements *before* committing to a development duration. The requirements gathering phase itself was proposed to be extended to November 30th. - **Communication Strategy:** The updated timeline, showing the pushed-out dependencies, will be presented on an upcoming Thursday call with the broader group. A courtesy heads-up via email may be sent beforehand to a key stakeholder, referencing a prior conversation about potential delays. ### Requirements and Development Blockers for Reports Progress on specific reports is completely stalled, as development cannot begin without critical answers from the signing team. Three outstanding requests have gone unanswered, preventing accurate estimation of the work required. - **Unanswered Requests Halt Progress:** The development team cannot confirm if the original 30-day development estimate is valid or even if the requested features are technically achievable due to the lack of detailed requirements. This uncertainty is the primary driver for the overall project delay. - **Path Forward with Dependencies:** The plan is to use the upcoming customer reports call to seek these answers. If the information is not received, a separate follow-up call will be necessary. The development of these reports is a key dependency for other project phases. ### Account Setup and Migration Status The migration of accounts from the legacy system (Data Services) to the new platform (UBM) is behind the previously communicated schedule, currently at only 31% completion. - **Current Progress Metrics:** Out of approximately 6,450 total bills, 2,018 accounts have been successfully migrated to UBM, and 4,344 have been processed through the account setup queue. A subset of these (3,235 non-EDI bills) is currently in an audit state. - **Revised Completion Date:** The estimated completion date for migrating all non-EDI accounts (including setup and mapping in UBM) was pushed back to December 6th. This acknowledges the current resource allocation and provides a more realistic buffer. - **Handling Final Bills and EDI:** A process question was raised regarding identifying "closed accounts" for reporting purposes, which depends on utilities correctly flagging final bills. 
Additionally, roughly 1,000 EDI-formatted bills are currently suspended, awaiting a separate green light and letter of agreement (LOA) before their migration and outreach can begin. ### Legal and Banking Dependencies The project's timeline is intertwined with external legal and banking processes that are proceeding on separate tracks. - **Letter of Agreement (LOA) Urgency:** The finalization of a master LOA is a critical dependency, especially for initiating outreach on the suspended EDI accounts. While work is ongoing, it was noted that if the agreement slips into February, it could become a project risk. A call with legal counsel is scheduled to advance this. - **Banking Setup Confirmation:** For a separate workstream (FBO), banking documentation from a client has been submitted and confirmed. However, the actual bank account setup is not yet visible in the system, though this is not expected to be an issue. - **BAI File Sample:** A follow-up is required on a request for a BAI sample file from another team member, which remains an open item.
URA & Reporting Priorities
## **Ongoing Customer and Operational Challenges** The meeting opened with a discussion of persistent external and internal operational pressures. A significant, long-standing issue with a customer who has not paid a substantial debt for over a year was highlighted, exemplifying ongoing accounts receivable challenges. Furthermore, a recurring theme was the influx of complaints and data inconsistency issues from various teams, particularly as they conduct month-end reporting, which is putting a strain on resources. There is also concern about client retention, with specific mentions of threats from customers like "Ascension" and "Apex" potentially dropping services, though "Sheets" was noted as a more understanding long-term client. Operationally, there are questions about the efficiency and focus of the operator assigned to the "Sheets" account, with observations that the workload metrics may not reflect productive output. ## **Updates on Technical Projects and Processes** A primary focus was the status of several key technical initiatives aimed at improving data handling and reporting. The implementation of a new template for processing data is complete; however, the current effort is focused on cleaning up thousands of redundant line items in the bill templates created over recent months, a task that requires careful validation to avoid deleting necessary data. Concurrently, credential management system updates are slated for completion within the week, which will provide a user interface for reviewing and updating access credentials. - **Template Implementation & Guardrails:** The new data ingestion template is in place. A major related task is the cleanup of accumulated bill template line items, with work targeted for completion this week. A recognized risk is that the AI may sometimes ignore the provided template, so plans are being made to establish guardrails and a review process for such exceptions, though the high volume of data poses a challenge for manual oversight. - **Credential Management:** A UI for credential management is nearing completion and expected to be finalized within the week. ## **Prioritization of Analytical Reports** A significant portion of the conversation centered on prioritizing the development of several analytical reports to improve visibility and operational control. The "Arcadia Report" (internally named for tracking purposes) was given top priority as it is deemed essential for identifying missing bills, a critical gap in current processes. A "Missing Bills Report" is also needed and is considered a quicker development task. Additionally, there are two other dashboard requirements that require weekly maintenance. The approach moving forward is to formalize a prioritized list of all report development work to manage expectations and streamline the workflow for the data engineering team. - **Arcadia Report (Top Priority):** This report is considered the fundamental tool for tracking actual missing bills, addressing a core operational blind spot. - **Missing Bills Report:** This is identified as a simpler, quicker report to develop in parallel. - **Formalized Prioritization:** To manage the constant influx of reporting requests, a comprehensive and prioritized list of all required reports and dashboards will be circulated to align the team and set clear expectations for delivery timelines. ## **Internal Communication and Alignment Issues** Friction and misalignment within the broader organization were a notable topic.
An incident was recounted where a manager, several levels removed, publicly expressed a lack of awareness about ongoing projects and releases, despite a recent comprehensive year-end review meeting that covered these exact topics. This has prompted a decision to institute more frequent, structured communication, such as a dedicated monthly meeting, to keep that part of the organization informed. The discussion also touched on a separate communication issue with another team ("URA"), where key representatives have stopped attending meetings, leading to information silos. ## **Specific Action Items and Follow-ups** The meeting concluded with plans to address several specific pending items. Immediate follow-ups are required with data engineers "Ruben" and "Jay" regarding a "linking pairing" issue for a client, as a previous meeting was missed. Furthermore, an email from "Jennifer" regarding data cleaning for "Exponent" needs to be reviewed and addressed, with clarification needed on whether it relates to previously discussed work. The team is also working through a list of client accounts that need attention before the end of the month.
Weekly Priority Meeting
## **Summary** The meeting focused on three primary technical workstreams: clarifying data source columns for billing systems, reviewing development progress on a validation and filtering interface, and planning a bulk data ingestion process with considerations for system performance. ### **Clarification of Data Columns and Sources** A detailed discussion was held to identify and understand the specific data columns related to bill creation, particularly their sources and overlapping information. The goal was to establish clear definitions and fixed possibilities for these data points to ensure consistent handling across platforms. - **Analysis of key columns:** The conversation centered on columns like `created by user` and `bill source`, which are present in systems like FTG Connect and UBM but may not be uniformly displayed everywhere. Identifying all relevant columns and their possible values (e.g., web download, mail) is a prerequisite for systematic data work. - **Verification of data origin:** There was a point of clarification regarding whether the `bill source` data is set within the application's own data services or originates from the upstream UBM database, highlighting the need for technical verification to confirm the system of record. ### **Development Progress on Validation Page & Filters** Significant progress was reported on a validation page, with new filtering capabilities being implemented to manage and review records more effectively, though some data mapping challenges remain. - **Enhanced filtering interface:** Client and vendor dropdown filters have been added to the page, enabling users to filter DSS records based on these selections. The client filter is fully functional due to an existing mapping table between DS and UBM databases. - **Outstanding integration challenge:** A current blocker is the lack of a mapping table for vendors between the UBM and DSS databases, preventing the vendor filter from populating correctly with matched records. Development is ongoing to resolve this data linkage issue. - **Additional feature work:** A parallel task involves implementing record count displays for each filter option, providing immediate visibility into how many records are returned when a specific filter criterion is applied. ### **Deployment Strategy and Environment Updates** The team reviewed the state of various features across development and production environments, leading to a decision on pushing recent changes for broader testing and review. - **Assessment of lower environments:** It was confirmed that several features, like edit history and PDF/B list viewing, are already present in lower environments (e.g., Build Management), while other JavaScript components are only in production. - **Decision to deploy:** To enable hands-on review and testing of the new validation page features, a decision was made to push the current development changes to a lower environment (dev server). This will allow for practical evaluation and any necessary adjustments before further progression. ### **Planning for Bulk Data Ingestion** A major operational plan was formulated to execute a large-scale ingestion of historical billing data, with careful attention to system load and queue management to avoid performance degradation. - **Scale and execution:** The plan involves processing a substantial backlog of data, estimated at 700-800 items, with an understanding of the total account volume being around 3,000.
The ingestion will be performed via a script with a controlled delay (e.g., 5 seconds) between uploads. - **Queue management strategy:** To prevent system overload, a detailed analysis of existing queue backlogs (Priority, Prepay) was conducted. Since current backlogs were minimal, the team decided to proceed with the ingestion while closely monitoring system metrics. - **Performance safeguards:** The ingestion will utilize a separate processing path that bypasses the main priority queues, uploading directly for processing. The team committed to real-time monitoring of queue throughput and system performance during the execution to swiftly address any potential spikes or issues, ensuring stable operation for other ongoing processes.
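The controlled-delay upload amounts to a throttle plus a backlog check before each item. A minimal sketch under assumed names: `upload_bill` and `queue_depth` stand in for the real client calls, and the backlog threshold is hypothetical.

```python
import time

DELAY_SECONDS = 5    # pause between uploads, per the plan above
MAX_BACKLOG = 50     # hypothetical safety threshold for the priority queues

def ingest(items: list[str], upload_bill, queue_depth) -> None:
    """Upload one item at a time, pausing whenever the queues back up."""
    for item in items:
        while queue_depth() > MAX_BACKLOG:  # let Priority/Prepay queues drain
            time.sleep(30)
        upload_bill(item)                   # direct path, bypassing priority queues
        time.sleep(DELAY_SECONDS)
```

At 700-800 items and five seconds apiece, the pure upload time is roughly an hour, which leaves queue throughput as the main variable to watch during the run.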
Bill Template Review
## **Overview of the Billing Data Download Workflow** The meeting focused on improving the efficiency and logic of the bill download and task management process within the Web Download Helper system. The primary goal is to reduce the significant amount of manual work currently required by the operations team to manage and snooze tasks for bills that are not yet available. Discussions centered on three main areas: enhancing the task snoozing logic, introducing dynamic expected posting dates, and reviewing updates to the invoice template system. ## **Purpose and Issues with the BDE Flag** The meeting clarified the intended use of the BDE flag and identified operational inefficiencies it was meant to address. The flag is designed to signal that an account's bill is expected to be pulled by a third-party service (BDE), not to automatically snooze tasks. However, a key problem was identified where operators still manually pull these bills, leading to duplicate work. The team discussed integrating a weekly report from BDE, which lists active accounts and their expected pull dates, to better synchronize expectations and task generation, moving beyond a one-size-fits-all snoozing logic. ## **Revamping Task Snoozing Logic & Workflow** The current snoozing functionality was deemed too static and labor-intensive, prompting a deep dive into potential solutions. The core issue is that the team spends excessive time checking portals for bills that systematically post later than the invoice date. - **Current snooze mechanism:** The system only allows a task to be hidden for a fixed two-day period before it repopulates, requiring repeated manual intervention. - **Dynamic date adjustment proposal:** The conversation shifted towards creating a system where an "expected posting date" can be defined, preventing tasks from appearing until the bill is actually available online. - **Challenges with cycle days:** Relying solely on cycle days (days from the service end date) is problematic because many vendors have delayed posting schedules or inconsistent billing cycles (e.g., quarterly bills, municipalities with fixed monthly posting dates). ## **Proposal for Vendor & Account-Level Expected Posting Dates** A two-tiered configuration system was proposed to automate task scheduling and drastically cut manual snoozing. This solution aims to set expectations systematically so the system "doesn't even bother looking" for a bill until it is likely to be posted. - **Vendor-level configuration:** For vendors where all accounts follow a uniform schedule (e.g., a municipality that posts all bills on the 15th of every month), a global expected posting date or delay could be set. This would apply as a default to all associated accounts. - **Account-level override:** For vendors with inconsistent billing cycles across different customers, the system would allow operators to set an expected date or cycle day override on specific accounts. This is crucial for vendors where one customer might be billed monthly and another quarterly. - **Implementation strategy:** The team acknowledged that populating these settings would be a gradual, manual process done over several months as bills are processed, rather than a massive upfront data migration. ## **Manual Reporting Process and Automation Potential** Significant time is spent on a manual weekly report to identify late bills, highlighting an area ripe for automation. 
This process currently involves complex filtering in a spreadsheet to isolate bills that are 35 days or older from their last invoice date while excluding those with future expected dates. - **Time consumption:** The manual report takes several hours to compile, representing a major efficiency drain. - **Automation goal:** Even if full automation isn't immediately feasible, the priority is to explore ways to reduce this task from hours to 30 minutes or less by leveraging or modifying existing system data like the outstanding web download report. ## **Updates to Invoice Template and Error Resolution** Progress on the invoice template system was demonstrated, and a specific billing error was reviewed to establish a resolution protocol. - **System enhancements:** Updates allow users to view the specific invoice template used for any processed bill directly from the bill's details, including templates with incorrect line items for reference. - **Handling "failed" bills:** The team examined a failed bill that lacked clear start/end dates because it was a print-out from a web portal. It was determined that for such cases, the bill date should be used as the service date. - **Workflow decision:** A decision was made that for bills failing pre-audit due to minor correctable issues (like missing dates), it is more efficient for the operations team to edit them directly within the system rather than having them routed through a separate ticket-based system, which would require re-entering all data.
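The weekly spreadsheet filter maps onto a couple of dataframe operations, which is one path from hours toward the 30-minute target. Column names here are assumptions about the export layout, not the system's actual fields.

```python
import pandas as pd

def late_bills(df: pd.DataFrame, today: pd.Timestamp) -> pd.DataFrame:
    """Bills 35+ days past their last invoice date, excluding accounts whose
    expected posting date is still in the future (columns are hypothetical)."""
    age_days = (today - df["last_invoice_date"]).dt.days
    not_yet_expected = df["expected_posting_date"].notna() & (
        df["expected_posting_date"] > today
    )
    return df[(age_days >= 35) & ~not_yet_expected]

# Usage sketch against an export of the outstanding web download report:
# df = pd.read_excel("outstanding_web_downloads.xlsx",
#                    parse_dates=["last_invoice_date", "expected_posting_date"])
# report = late_bills(df, pd.Timestamp.today().normalize())
```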
Laserfiche Service for Constellation Energy Bills
## Summary The meeting centered on designing a new, unified system for ingesting utility invoices, primarily focusing on Constellation electricity bills, with additional discussions on handling gas bills and resolving data issues with a specific client (Ascension). The goal is to create a streamlined, centralized service that avoids redundant requests to external teams and minimizes changes to existing data structures and customer relationship management (CRM) systems. ### Constellation Invoice Retrieval Strategy The core proposal involves leveraging an existing file storage container ("Navigator" style) to receive all Constellation customer bill PDFs and XMLs automatically. - **Centralized Container Approach:** Instead of each application team making separate requests, the plan is to ask the external data team (ADT) to replicate a storage container where all Constellation bills are deposited. This provides a single source of truth for applications like Data Services (DSS) and UBM to pull from. - **Minimizing External Team Impact:** A key objective is to present a comprehensive, unified request to the ADT ("Shooting team") to prevent them from receiving conflicting or piecemeal demands from different internal teams. This involves specifying the exact need: a container with appropriate permissions and a Shared Access Signature (SAS) token that grants full manipulation rights (read, write, delete). - **Proving Feasibility:** A similar setup is already functioning for a different purpose (carbon accounting/GHC container), which provides a working blueprint and suggests the implementation timeline could be relatively short (a few weeks). ### SRT System and LDC Mappings The conversation clarified how customer and account identification would work within the new flow, relying on the existing SRT (Service Request Tool) system. - **Customer Identification via CNE ID:** The Customer Network Entity (CNE) ID, obtainable from the CRM (HubSpot), serves as the primary customer identifier. This ID has a one-to-many relationship with Local Distribution Companies (LDCs). - **Account Specification via LDCs:** An individual customer (CNE ID) can have multiple utility accounts, each represented by a unique LDC number. The SRT tool can provide the full list of LDCs associated with a given CNE ID, which is necessary for pinpointing specific account bills within the storage container. - **Processing Latency Unknown:** A remaining open question is the latency of the file replication job (how frequently new bills are copied into the container), which requires further testing by attempting to drop a file and observing the sync time. ### File Handling and Proposed Storage Solution The discussion covered how applications would access the bills from the central container and proposed an intermediate storage layer for flexibility. - **Intermediate S3 Bucket Proposal:** As a short-term solution, a process would fetch files from the Constellation container and drop a copy into an internal S3-compatible bucket. This provides a more accessible and controllable API endpoint for internal applications. - **Enhanced Control and Traceability:** Using an internal bucket allows for adding event tracing, logging, and metadata tagging (e.g., LDC, date). Applications could call an API to list available files for a given LDC and date range before retrieving the specific PDF or XML, improving reliability and auditability.
- **Managing File Lifecycle:** Questions about automatic "soft deletes" in the source container and how to handle file versioning were acknowledged as important details that need to be understood and configured within the new system's logic. ### Gas Bill Ingestion Challenges Extending the solution to gas bills presents significant technical hurdles due to different source systems and access limitations. - **Separate Source System (RBS/Genesis):** Gas invoices are held in a separate system (RBS/Genesis) managed by a different team (Laserfiche), not in the same pipeline as electricity bills. - **Access and Network Barriers:** Current access methods are problematic. One method provides a SAS token that expires weekly and requires manual renewal, while another (Laserfiche) is inaccessible from outside the corporate network, rendering it useless for an automated external service. - **Longer-Term Integration Question:** The fundamental challenge is whether the gas bill data can be integrated into the same logical flow. One potential solution explored is requesting that the Laserfiche team deposit gas bills into the same centralized container, creating a unified "Constellation bills" stream, regardless of utility type. ### Ascension Data Cleanup and Date Filtering An immediate, practical issue was addressed concerning a specific client's data, which contained duplicate and outdated bill files. - **Duplicate and Old Bills:** The dataset received for the Ascension client included bills dating back to February and March, leading to unexpectedly large file sizes and duplicate ingestion efforts. - **Plan for Filtering:** The immediate action plan is to use a provided script to parse the accompanying JSON metadata files (which correspond to each PDF) to identify and exclude all bills dated prior to January 1st. This will create a clean list of only the relevant, recent PDFs to ingest. - **Broader Date Definition Issue:** This problem highlighted the ongoing, critical challenge of standardizing date parameters (e.g., invoice date, statement date, service period) across different systems and teams to prevent similar issues in future automated retrievals.
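The Ascension filtering step is small enough to sketch. This assumes one JSON sidecar per PDF sharing the PDF's filename, with an ISO-formatted date field; the field name, the year on the cut-off, and the layout are all assumptions, and the provided script remains the authority.

```python
import json
from datetime import date
from pathlib import Path

CUTOFF = date(2025, 1, 1)  # "January 1st" per the meeting; year assumed

def recent_pdfs(folder: Path) -> list[Path]:
    """PDFs whose sidecar JSON carries a bill date on or after the cutoff."""
    keep = []
    for meta in folder.glob("*.json"):
        data = json.loads(meta.read_text())
        raw = data.get("statement_date") or "1900-01-01"  # hypothetical field
        if date.fromisoformat(raw) >= CUTOFF:
            keep.append(meta.with_suffix(".pdf"))  # matching PDF by filename
    return keep
```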
Jay <> Faisal
## Customer The customer is a strategic partner operating in the energy or facilities management sector, using the platform for comprehensive utility bill management. This client, referenced as Hexpo (or Hexable), manages multiple entities under their portfolio, including St. Elizabeth and Munson. Their role involves accruing costs, analyzing usage, and generating consolidated reports from utility data across numerous locations and accounts. Their background suggests a sophisticated, data-driven operation where billing accuracy and report integrity are critical for financial operations and sustainability reporting. ## Success The customer's most significant success with the platform lies in their proactive and detailed internal audit capabilities. Faced with pervasive data inconsistencies, their team undertook a substantial manual effort to identify discrepancies. They created extensive, customized reports and spreadsheets that meticulously flagged issues the platform failed to catch automatically, such as incorrect account linking and implausible usage figures (e.g., unit of measure errors). This self-driven diagnostics work was crucial in highlighting the severity and specific nature of the platform's data integrity problems, effectively prioritizing the most critical fixes and preventing complete operational breakdown in their reporting. ## Challenge The single biggest challenge is the platform's persistent and systemic **data linking and integrity issue**. This core problem manifests in several critical ways: - **Account Linking Errors:** Virtual accounts (representing utility services like distribution, supply, or full service) are not correctly linked to their physical locations or to each other when they should be. This results in split bills, duplicated accounts, and invoices posted to incorrect entities, making consolidated reporting impossible. - **Propagation of Bad Data:** The decision to turn off automated validation checks in the platform months ago allowed errors to be "auto-resolved" without human review. This created a massive backlog of potentially incorrect bills where it's now impossible to distinguish between data that was manually corrected and data that is fundamentally flawed. - **Impact on Core Functionality:** These issues directly undermine the primary value proposition of the platform. Customers cannot trust the accrual reports, usage analytics, or carbon reporting, rendering key features unusable and threatening contract renewals. The problem is compounded because bulk-fixing tools often fail when accounts are already incorrectly linked, forcing highly manual and unsustainable correction processes. ## Goals The customer's primary objectives in using the platform and engaging with the support team are clear: 1. **Achieve Data Accuracy:** To have all utility bills correctly linked and accurately reflected in the platform, ensuring that cost accruals and usage data are reliable for financial and operational decision-making. 2. **Restore Reporting Confidence:** To be able to utilize the platform's analytics and reporting features without manual workarounds or external spreadsheets, trusting that the data presented is a single source of truth. 3. **Implement a Sustainable Solution:** To move beyond one-off, manual cleanup efforts. The customer needs a systemic fix, whether through new platform logic, templates, or enhanced validation, that prevents these linking and data integrity issues from recurring with new invoice processing. 4. 
**Ensure Platform Stability for Contract Renewal:** To resolve these critical issues to a satisfactory level before key contract renewal dates, as the current state poses a direct threat to the continued business relationship.
UBM Reporting - Automate for Internal MGT & Client Facing
## **Summary** The meeting centered on critical operational and reporting challenges within the utility bill management system, focusing on the need for a single, reliable source of truth for tracking bills, resolving systemic data issues, and developing more advanced analytical capabilities for clients. ### **Establishing a Reliable Bill Tracking Methodology** The core problem identified was the lack of a unified, clear view to determine whether all expected utility bills have been received and processed. Different reports (e.g., "Late Arriving Bills" vs. a "Missing Bills" report) create confusion. - **Defining a clear tracking view:** The discussion emphasized moving away from ambiguous "lateness" metrics based on unreliable expected dates. Instead, a simpler "have/don't have" status check for each account was proposed, potentially categorized by billing cycle, which is often published in advance by utilities and is more stable. - **Need for a single source of truth:** A significant challenge is data fragmentation across multiple systems (Data Services, UBM) and intake sources (Arcadia, BDE, web downloads, mail). The team stressed the necessity of one consolidated list that defines the 100% of accounts expected, against which all incoming data can be measured, to know precisely what is missing and from which source. - **Communication of systemic issues:** The group noted that third-party data providers like Arcadia may have statuses for utility billing outages, but this communication needs to be integrated into the internal tracking process to avoid unnecessary follow-up on accounts where delays are known. ### **Addressing Operational Data Integrity Hurdles** Several ongoing operational issues directly impact the accuracy of reporting and client trust, primarily concerning duplicate bills and data synchronization. - **The duplicate bill crisis:** A major operational burden was highlighted, where thousands of duplicate bills are processed weekly. This is exacerbated by changes of address (where web downloads and paper mail run concurrently) and failures in the duplicate resolution system, particularly when bills lack a clear invoice date. - **Impact on client confidence:** Passing duplicates through to the client's view undermines the promised validation checks and forces manual remediation, creating a significant credibility issue. - **Discrepancy management:** There remains a reporting gap and discrepancies between what is present in the Data Services (DS) platform versus what appears in the UBM system, both for accounts and credentials. This gap prevents a clear picture of data flow and completeness. ### **Developing Advanced Reporting for Energy Management** The conversation shifted to the need for the platform to evolve beyond basic bill pay into a true energy management tool, identifying a key reporting gap. - **The critical need for a Rolling 12 report:** A standard report in the industry, the Rolling 12-month view, is currently difficult or impossible to generate within the platform for a custom fiscal year. Clients and internal energy managers are forced to manually compile multiple data exports (a minimal sketch of such a view follows this summary). - **Value of advanced analytics:** This type of report is essential for finance teams to understand run-rate, compare year-over-year performance, track against budget, and identify consumption anomalies that could signal equipment issues or energy efficiency opportunities. Its absence limits the platform's value proposition and forces clients to perform analysis externally.
- **Expanding use for CES:** The Commercial Energy Services (CES) team requires these analytical capabilities to proactively identify savings opportunities for clients, aligning the platform's functionality with the broader value sold during the sales process.
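Once usage is aggregated monthly, a Rolling 12 view is a per-account trailing window, with the fiscal-year wrinkle handled as a labeling step. A minimal pandas sketch; the column names are assumptions about an export, not the platform's schema.

```python
import pandas as pd

def rolling_12(df: pd.DataFrame) -> pd.DataFrame:
    """Trailing 12-month usage and cost per account.
    Assumes (hypothetical) columns: account_id, month (datetime), usage, cost."""
    monthly = (df.groupby(["account_id", pd.Grouper(key="month", freq="MS")])
                 [["usage", "cost"]].sum())
    return (monthly.groupby(level="account_id")
                   .rolling(12, min_periods=12).sum()
                   .droplevel(0))

# For a custom fiscal year (e.g., one starting in July), tag each month before
# comparing year over year:
# month = report.index.get_level_values("month")
# report["fiscal_year"] = month.year + (month.month >= 7)
```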
Quarterly Delivery Check-in
## **Reports and Data Discrepancies** The meeting began with a focus on several reports, particularly those being prepared for January. There are ongoing efforts to align data between different systems, with specific discrepancies identified between data pulled from a platform called PayClearly and what is being displayed in Power BI reports. An investigation is planned to resolve these mismatches, which could be due to timing issues in data refreshes or missing data fields. - **Addressing data alignment issues:** A key task is to reconcile summary reports created from PayClearly exports with the views available in Power BI. A meeting was scheduled to analyze specific examples of these discrepancies to determine if they are caused by timing delays in API data refreshes or by fundamentally missing information. ## **Credential Management and Validation** Progress is being made on a credential management project, which is a key multi-year initiative. The core credential management component is targeted for completion within the next week, to be followed shortly by a validation piece. This work is foundational for onboarding customers to a platform named Arcadia. - **Upcoming completion and ownership:** The credential management system itself is on track for completion imminently, with the validation component to follow about a week later. Once finished, ownership of the validation process will be transitioned to another team, and customers will be onboarded to Arcadia in a phased, one-by-one manner. ## **System Status Report and Upcoming Reporting Tasks** The system status report is considered complete from a development standpoint, though it requires feedback and likely some adjustments. A broader list of approximately nine other reports is in the pipeline, encompassing both new reports and adjustments to existing ones. - **Prioritizing a queue of reports:** A comprehensive list of all pending report requests will be circulated internally to confirm priorities and requirements. This will help the team sequence the work effectively alongside other quarterly goals. ## **MFA Strategy and Arcadia Integration** A significant discussion centered on the strategy for Multi-Factor Authentication (MFA). There is an active evaluation of whether to invest in a new MFA solution or to rely on the capabilities provided by the Arcadia platform as more services migrate there. - **Evaluating the need for a new MFA solution:** The team is assessing whether procuring a separate MFA technology (to move away from using personal phones) is still necessary if most systems are transitioning to Arcadia, which has its own MFA system. This decision hinges on understanding the timeline and scope of the migration. - **Understanding the current MFA landscape:** As part of this evaluation, the team needs to build a complete inventory of which vendors or systems currently require MFA and where those authentication assets (like phone numbers) are assigned today. This will clarify the scale of the problem for any remaining non-Arcadia systems. ## **Quarterly Roadmap and Upcoming Priorities** The conversation reviewed the Q1 roadmap to ensure the team remains on track, with an emphasis on completing January deliverables to maintain momentum. Several projects for February and March were previewed to allow for advance planning. - **Maintaining Q1 momentum:** The team expressed confidence in completing most planned Q1 work but acknowledged the tight timeline.
The goal is to avoid slippage by completing January tasks promptly, as the schedule for subsequent months is already full. - **Key initiatives for February:** February's focus includes data validation errors, credential management tasks, a review of rules, and the addition of a holistic account health dashboard in Power BI. The MFA strategy discussion is also a key February milestone. - **Potential scope reduction for future projects:** Two upcoming initiatives, an "interpreter layer" and phases of "integrity checks", may be significantly reduced in scope or eliminated entirely. Changes to the bill template and the Arcadia integration could make these efforts redundant. The "review of rules" project may also help address the integrity check requirements.
Plan for Arcadia
## **Summary** This meeting focused on critical operational and technical challenges related to system integrations, data consistency, and reporting backlogs. The primary goal was to advance the integration with **Arcadia** for automated bill processing while simultaneously addressing systemic issues in credential management and the overwhelming demand for new reports. ### **Progress and Plan for Arcadia Integration** The immediate plan is to complete the first major ingestion of data from Arcadia, with processing expected to begin the following day. The initial test involves a 150MB data batch to validate the processing scripts before handling the full volume. The integration aims to automate the ingestion of billing PDFs, moving away from manual downloads, but the immediate focus is on ensuring the technical pipeline works correctly before scaling. - **Initial Batch Processing:** The team plans to start with a secondary data batch to test the script's functionality, as the first batch requires filename adjustments. Success with this test will confirm the pipeline is ready for full-scale operation. - **Addressing Duplicates and Failures:** Initial data from Arcadia has already been processed, resulting in some duplicate entries and expected system failures (e.g., bills with invalid formats or currencies). These are considered normal outcomes of integrating a new data source and will be handled through existing operational workflows. - **Long-Term Source Identification:** A crucial next step is to formally tag all data sourced from Arcadia within the system. While the origin can currently be traced in the backend, this needs to be surfaced as a filterable attribute for operations and reporting clarity. ### **Identifying and Classifying Potential System Failure Points** A significant part of the discussion centered on proactively defining categories for potential failures in the Arcadia pipeline. The objective is to correctly attribute issues to avoid incorrectly blaming the new integration and to establish clear troubleshooting workflows. - **Credential Mismatches:** A primary anticipated issue involves discrepancies between account credentials stored in different systems (UBM vs. DS). This could prevent Arcadia from successfully accessing bill data. - **Vendor Mapping Errors:** Failures may occur if the identifiers used by Arcadia do not correctly map to internal vendor IDs, as was a challenge with the Ascension account. - **Establishing Expected Behavior:** The team emphasized the need to define what "correct" processing looks like to effectively identify anomalies. The plan is to establish four or five key "buckets" or categories of issues to monitor over the first 30 days of the integration. ### **Developing a Centralized Credential Management Solution** To resolve the root cause of credential-related failures, a new internal tool is being developed. This system aims to become the single source of truth for account credentials, eliminating current conflicts and manual processes. - **Two-Component System:** The planned solution will feature a **Credential Validation** page that flags discrepancies (e.g., credentials existing in one system but not the other, or mismatched values) and a **Credential Management** page for making corrections. - **Ownership and Workflow:** Clear ownership must be assigned for resolving validation alerts. The resolution process will involve choosing which system's data is correct, marking accounts inactive, or updating credentials directly. 
- **Data Synchronization Challenge:** A major technical hurdle is ensuring any change made in the new tool is synchronized bidirectionally with the legacy DS and UBM systems to prevent the same discrepancies from reappearing. The long-term goal is to have all systems point to a single, logged credential table. ### **Managing the Reporting Backlog and Data Discrepancies** The proliferation of ad-hoc report requests is consuming excessive resources. The team decided on a strategy to halt new requests after fulfilling the current backlog to focus on fixing the underlying data issues the reports reveal. - **Consolidating Requests:** Nine specific reports are currently in the queue, including updates to the Combined Lab report, a new Arcadia-specific report, and requests from multiple stakeholders (Tim, Gary, Mary, Afton). - **Communication and Moratorium Plan:** An email will be sent to all stakeholders listing all active and pending report requests to create visibility into the total workload. The intent is to deliver these nine reports and then institute a three-month pause on new requests to dedicate effort to solving the core problems the reports highlight. - **Data Integrity Issue:** An immediate inconsistency was identified where the count of "late billing" data differs between the UBM and DS Power BI reports, despite supposedly drawing from the same sources. This discrepancy requires investigation to ensure report accuracy. ### **Miscellaneous Operational Items** Two ancillary topics were briefly addressed concerning internal processes and compliance. - **Disaster Recovery (DR) Testing:** The team is coordinating a mandatory DR test but is pushing back on testing the third-party Auth0 integration, arguing that an outage of this external service is beyond their control to remediate. The focus will be on testing internal application failover. - **Vendor Support List:** For operational clarity, a list needs to be maintained and exposed internally showing which vendors are supported by Arcadia versus those that will always require manual bill retrieval. This will help route tasks correctly within the operations team.
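The core of the planned Credential Validation page is a full outer comparison of the two credential stores. A sketch under assumed column names; the real UBM and DS schemas, and how secrets are actually stored, will differ.

```python
import pandas as pd

def credential_discrepancies(ubm: pd.DataFrame, ds: pd.DataFrame) -> pd.DataFrame:
    """Flag accounts missing from one system or carrying mismatched values.
    Both frames are assumed to have: account_id, username, secret_ref."""
    merged = ubm.merge(ds, on="account_id", how="outer",
                       suffixes=("_ubm", "_ds"), indicator=True)
    merged["issue"] = "ok"
    merged.loc[merged["_merge"] == "left_only", "issue"] = "missing_in_ds"
    merged.loc[merged["_merge"] == "right_only", "issue"] = "missing_in_ubm"
    both = merged["_merge"] == "both"
    mismatch = both & ((merged["username_ubm"] != merged["username_ds"])
                       | (merged["secret_ref_ubm"] != merged["secret_ref_ds"]))
    merged.loc[mismatch, "issue"] = "mismatched_values"
    return merged[merged["issue"] != "ok"].drop(columns="_merge")
```

Each flagged row then needs the ownership decision described above: pick the correct system's value, mark the account inactive, or update both stores.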
Ascension Operations Update
## Summary The meeting primarily focused on the operational status and ongoing challenges with the Ascension account, covering progress on electronic payment adoption, data integration with a new platform, and strategies for addressing client concerns. A secondary discussion involved similar issues with the Victra account, using it as a comparative benchmark. The core objective was to align on messaging and next steps for client communication and internal process improvement. ### Electronic Payment Conversion Progress for Ascension Significant progress has been made in converting payments from paper checks to electronic ACH transfers, though a notable batch of exceptions remains. - **Substantial electronic payment adoption:** For a recent payment cycle, 2,666 out of 3,020 accounts were paid electronically, representing roughly 88% adoption. This is framed as strong progress. - **Targeted resolution for remaining checks:** The 354 accounts that still required checks are being actively investigated. A process is underway to pull credentials for these specific accounts to identify which ones simply need credential updates versus those with more complex issues. - **Structured follow-up process:** For accounts lacking credentials, a team member will investigate the individual payment portals to determine the necessary information for conversion. This information will then be emailed individually to the client and loaded into the system attributes to prevent future issues. ### Unresolved Account and Credential Issues Beyond the bulk conversion effort, several specific operational hurdles were identified that are causing recurring problems and client questions. - **The "51 Accounts" issue:** A backlog of approximately 51 accounts requiring action is slowly decreasing. The delay is attributed partly to cryptic or outdated notes in the system, which create confusion on required actions, and partly to utility response times that are beyond direct control. - **Uncontrolled Virtual Account Creation:** A technical issue was identified where the system creates new virtual accounts for existing customers if any data field (like meter or service type) varies between billing cycles. This makes it appear that accounts are missing or unlinked. - **Operational solution in place:** The current fix requires manual intervention by operators to ensure new virtual accounts are properly linked and have all necessary attributes copied over before processing. Two operators are currently dedicated to managing this queue for Ascension. ### Arcadia Integration and Data Migration Status The team is actively testing and implementing Arcadia as a new platform for managing Ascension account data and credentials, running it in parallel with existing processes. - **Credential testing phase:** Arcadia is actively testing the 2,310 credentials that have been submitted to it. A known set of 18 vendors (affecting roughly 100 accounts) has consistently failing credentials that require cleanup. - **Extensive account data loaded:** The Arcadia dashboard shows data for 6,930 account-provider combinations, which includes all historical accounts found under the submitted credentials, not just active ones. - **Parallel processing established:** PDF bills from Ascension are being successfully pulled into the new system. The team is building a foundation for continuous, automated ingestion, with a full test batch scheduled for processing over a weekend to gauge system performance and timing.
### Broader Operational and Client Relationship Challenges Discussions extended to systemic issues impacting client satisfaction and the justification for current operational constraints. - **Handling client concerns about late fees and checks:** A strategy was discussed for responding to frequent client emails about late fees and disconnection notices. The proposed response is to consistently explain the root cause: bills are often pulled after the due date due to process timing, making lateness unavoidable. The focus should be on sharing the progress made on electronic conversion. - **Explanation for payment method inconsistency:** For cases where an account pays electronically one month but by check the next, reasons can include portal payment limits (requiring a split payment), credit balances, or missing credentials at the time of processing. Each case is investigated individually. - **Comparative performance:** It was noted that the current ~88% electronic payment rate for Ascension is considered strong, especially when compared to the historical performance of the previous provider, who did not use customer portals and likely had a higher check volume. ### Path Forward and Client Communication Strategy The meeting concluded with alignment on next steps and how to communicate progress and plans to the client. - **Cautious timeline for Arcadia transition:** While work is advanced, the team agreed not to commit to a firm, short-term deadline for fully transitioning Ascension to Arcadia. Instead, a runway of 30-60 days was suggested to work through challenges. - **Transparent and proactive communication:** The consensus was to maintain transparency with the primary client contact, providing detailed updates on electronic payment progress, the plan for residual check issues, and the work on Arcadia integration to build comfort and preempt internal pressures they may be facing. - **Decision point on manual processing:** A pending decision was noted regarding whether to manually process any new, urgent Ascension invoices outside of the new system during the transition, with a review planned for the following Monday based on the weekend's system test results.
UBM Planning
## Summary

The meeting centered on resolving critical discrepancies in late-bill reporting data between systems, establishing dashboard priorities, addressing customer billing requests, and managing urgent client payment file terminations.

### Discrepancies in Late Bill Reports

A significant discrepancy was identified between the number of late accounts reported in different systems, requiring immediate investigation and alignment.

- **Data Mismatch Investigation:** The core issue is a major variance in the count of late bills for a specific client ("extension") between the "Data Services" system (showing 93 accounts) and the "UBM" system (reporting approximately 500 accounts). A gap this large suggests problems with data synchronization, duplicate records, or differing definitions of "late."
- **Clarifying Report Logic:** The report in question flags an invoice as "late" if it is not received by a calculated "expected date," which is derived from the prior invoice cycle (see the sketch at the end of this summary). The team needs to ensure this logic is applied consistently across all data sources.
- **Defining Key Metrics:** There was a specific discussion about clarifying the "last payment date" field to avoid customer confusion. The team agreed it currently reflects when a payment was marked for processing in UBM, not necessarily when it cleared the bank, a crucial distinction for customer communications.
- **Next Steps for Resolution:** The immediate action is to share and compare exported data sets from both systems to pinpoint the root cause of the discrepancy. The goal is a single, reliable source of truth for late-account reporting.

### Prioritization of Dashboard Development

The development of new reporting dashboards was discussed, with a clear directive on which project should receive immediate focus.

- **Template-Based Dashboard as Priority:** The highest priority is a dashboard that visualizes the metrics and calculations currently tracked in a specific Excel template. This dashboard will standardize the weekly reporting process for late bills and related metrics.
- **Strategic Rationale for Priority:** This project was prioritized over other dashboard work because it directly supports the most critical, time-sensitive operational reviews, where data accuracy is currently under scrutiny.

### Customer Billing Requests and System Updates

The conversation touched on standard customer service requests and the systems required to fulfill them.

- **Paperless Billing Requirements:** A customer request to switch to paperless billing was reviewed. It was clarified that to go paperless, customers typically need an active "Energy Manager" or "CEP" account, not just a profile in the UBM billing system.
- **Account Cleanup Tasks:** Related work is in progress to clean up and reconcile customer account data for another client ("Ascension"), with a target of completing this work the same day.

### Urgent Client Payment File Terminations

An urgent operational issue was raised regarding several clients demanding an immediate stop to payment file generation.

- **Immediate Deadline Pressure:** One client ("Facial") has demanded that payment files stop being generated, initially requesting an immediate halt. A compromise was reached to process any final transactions and then cease files by the end of the week. Other clients are expected to follow with similar requests.
- **Operational Coordination Needs:** This creates a tight timeline requiring coordination between teams to: 1) clear any pending transactions in the system, 2) formally stop the automated file generation, and 3) allow the payment processor ("PayClearly") time to administratively close the linked bank accounts. The team emphasized the need for clear, confirmed deadlines from clients to execute this process smoothly.
- **Contractual Implications:** The urgency is partly driven by a separate client ("URA") that has threatened contract termination if certain issues are not resolved by the end of the month, highlighting the broader business risk associated with billing and payment system problems.
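As a concrete illustration of the late-bill logic discussed above, here is a minimal sketch of the expected-date calculation, assuming the cycle length is inferred from the gap between the prior two invoices plus a grace period; the field names and the five-day grace window are hypothetical, not the production definition.

```python
from datetime import date, timedelta

GRACE_DAYS = 5  # hypothetical buffer before an invoice is flagged late

def expected_date(prior_invoices: list[date]) -> date:
    """Project the next invoice date from the prior cycle: use the gap
    between the last two invoices as the cycle length, falling back to
    30 days when only one prior invoice exists."""
    ordered = sorted(prior_invoices)
    if len(ordered) >= 2:
        cycle = ordered[-1] - ordered[-2]
    else:
        cycle = timedelta(days=30)
    return ordered[-1] + cycle

def is_late(prior_invoices: list[date], today: date) -> bool:
    """An account is late if no invoice arrived by expected date + grace."""
    return today > expected_date(prior_invoices) + timedelta(days=GRACE_DAYS)

history = [date(2024, 11, 14), date(2024, 12, 16)]   # ~32-day cycle
print(expected_date(history))                        # 2025-01-17
print(is_late(history, date(2025, 2, 1)))            # True: past expected + grace
```

Comparing both systems' exports against one agreed-upon function like this is one way to locate where the 93-versus-500 divergence originates.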
Credential Management Plan
## Overview of the EPIC and Development Approach

The discussion centers on a new EPIC focused on data credential management, introduced for collaborative review and planning rather than as a finalized specification. The intent is to adopt an iterative, interactive development process, moving away from a linear "finish then review" model. This approach allows for continuous feedback and adjustments as the work progresses, ensuring alignment and surfacing any overlooked requirements early.

## Consolidating Credential Data from Multiple Sources

A foundational requirement for the system is the ability to aggregate credential information from two distinct backend services: UBM and Data Services (DS). The architecture must pull data from both systems to create a unified view for management and validation. This integration is the first critical step in building a comprehensive credential management platform.

## Core Credential Comparison and Matching Logic

The central operational logic compares the credentials retrieved from the two source systems. The outcome of this comparison dictates the subsequent workflow:

- **Matching Credentials:** When credentials from UBM and DS are identical, they should be automatically migrated to a new, designated table within the credential management interface, signifying they are verified and ready for active management.
- **Non-Matching Credentials:** Credentials that do not match between the two sources must remain in the credential validation queue. The discrepancy flags them for manual intervention and resolution.

## Managing Discrepancies and Required User Actions

For credentials that fail the automatic matching process, a clear manual workflow is necessary. Authorized personnel must be able to review each discrepancy and choose the correct resolution from a set of predefined actions:

- **Validate Correct Source:** Decide whether the UBM credential or the DS credential is the correct, authoritative record.
- **Invalidate and Remove:** Mark a credential as obsolete or incorrect and delete it entirely from the system.
- **Other Remedial Actions:** Accommodate additional resolution paths as needed to handle edge cases or complex discrepancies.

## Synchronizing Changes Back to Source Systems

A crucial final component of the workflow is bidirectional synchronization. After actions are taken within the management interface, such as validating a correct record or deleting an invalid one, the changes must be reliably propagated back to the respective original applications (UBM and DS). This keeps data consistent across the entire ecosystem and closes the loop on the credential management lifecycle.
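A minimal sketch of the compare-and-route logic described above, assuming hypothetical record shapes; the real matching rules (for instance, which fields constitute identity) were not specified in the discussion.

```python
from dataclasses import dataclass
from enum import Enum, auto

@dataclass(frozen=True)
class Credential:
    account_id: str
    username: str
    password: str

class Route(Enum):
    MIGRATE = auto()           # identical in UBM and DS: move to managed table
    VALIDATION_QUEUE = auto()  # mismatch or missing: hold for manual review

def route(ubm: Credential | None, ds: Credential | None) -> Route:
    """Identical credentials migrate automatically; anything else
    (a mismatch, or presence in only one system) awaits human review."""
    if ubm is not None and ubm == ds:
        return Route.MIGRATE
    return Route.VALIDATION_QUEUE

managed_table: list[Credential] = []
validation_queue: list[tuple[Credential | None, Credential | None]] = []

pairs = [
    (Credential("A1", "ops", "pw1"), Credential("A1", "ops", "pw1")),  # match
    (Credential("A2", "ops", "pw2"), Credential("A2", "ops", "OLD")),  # mismatch
]
for ubm_cred, ds_cred in pairs:
    if route(ubm_cred, ds_cred) is Route.MIGRATE:
        managed_table.append(ubm_cred)
    else:
        validation_queue.append((ubm_cred, ds_cred))

print(len(managed_table), len(validation_queue))  # 1 1
```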
Prep for DSS Improvements
## **Overview of the File Processing Project**

The meeting centered on advancing a critical project to automate the ingestion of vendor invoice PDFs (specifically for Ascension) into the system. The immediate goal was to finalize and deploy a script to rename and prepare approximately 1,800 files for processing. A key dependency involved providing clear instructions and the necessary data mappings to an external team member, Priyanka, to ensure files were correctly renamed before ingestion. The broader objective is a standardized, automated pipeline that leverages existing invoice templates for efficient data extraction, moving away from manual, generic processing methods.

## **Naming Convention and Data Mapping Requirements**

A core focus was defining and communicating the exact naming convention for the PDF files to enable automatic matching and processing.

### **Defining the Essential Data Points**

The file names must encapsulate three critical pieces of information to be successfully matched against the internal database: the Client ID, Vendor ID, and Account Number. The **Client ID** for Ascension is a fixed, hard-coded value. The **Vendor ID** is unique to each vendor within the client's system, and Priyanka was confirmed to have access to this mapping from a previously shared spreadsheet. The **Account Number** must be the *non-cleaned* version, including spaces and dashes as they appear on the original documents, to ensure accurate matching even if the source system strips special characters.

### **Resolving Mapping and Access Issues**

There was significant discussion around sourcing the correct **Vendor Code**, which is distinct from the Vendor ID and specific to the internal system (DS). It was determined that Priyanka likely does not have this information. The action is to export a list of existing vendor codes and IDs for Ascension and provide it to her. This list will serve as the authoritative mapping, ensuring she renames files using the correct identifiers from the internal system, not from intermediary Excel files.

### **Handling Temporal Data and Duplicates**

The team anticipated a future challenge: multiple invoice files for the same account over time (e.g., from different months). To prevent overwrites and maintain a clear audit trail, it was agreed that **a timestamp or date must be incorporated into the file name or folder structure** (see the naming sketch later in this summary). This ensures each unique bill version can be stored and processed independently.

## **File Transfer and Infrastructure Considerations**

The current method of transferring large ZIP files (several gigabytes) via email was identified as a bottleneck.

- **Current Process Limitations:** Downloading and re-uploading these large files is time-consuming and prone to delays, especially with potential VPN or bandwidth issues. This creates risk if files need to be re-sent overnight with corrected names.
- **Exploring Alternatives:** Options such as a dedicated **SharePoint folder** or an **FTP location** were briefly discussed. These would allow direct, programmatic file access and eliminate the manual email step.
- **Decision for Immediate Action:** For the immediate task of processing the 1,800 files, the team decided to continue with email to avoid investing time in new infrastructure, while acknowledging the need for a more robust, automated transfer solution as a future improvement.
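Since the exact convention was still being finalized, here is a minimal sketch of one layout that carries all three identifiers plus a date, as discussed above. The double-underscore separator, the hard-coded client ID value, and the field ordering are hypothetical choices; the separator is picked so the account number can keep its original spaces and dashes.

```python
import re
from datetime import date

CLIENT_ID = "ASC001"  # hypothetical stand-in for the hard-coded Ascension ID

def build_name(vendor_id: str, account_number: str, invoice_date: date) -> str:
    """Encode client, vendor, the non-cleaned account number, and a date.
    Double underscores separate fields so the account number may keep
    its original spaces and dashes (e.g., '12 345-678')."""
    return f"{CLIENT_ID}__{vendor_id}__{account_number}__{invoice_date:%Y%m%d}.pdf"

NAME_RE = re.compile(
    r"^(?P<client>[^_]+)__(?P<vendor>[^_]+)__(?P<account>.+)__(?P<date>\d{8})\.pdf$"
)

def parse_name(filename: str) -> dict[str, str] | None:
    """Recover the identifiers; returns None if the name is malformed."""
    m = NAME_RE.match(filename)
    return m.groupdict() if m else None

name = build_name("V042", "12 345-678", date(2025, 1, 15))
print(name)              # ASC001__V042__12 345-678__20250115.pdf
print(parse_name(name))  # {'client': 'ASC001', 'vendor': 'V042', ...}
```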
## **Script Development and Validation Logic**

The development and logic of the processing script were scrutinized to ensure robust, correct operation.

### **Core Script Functionality**

The script's primary task is to **rename the downloaded PDF files according to the established naming convention** and prepare them for upload into the DSS platform. The team may share the script directly with Priyanka for execution, with a final decision planned for the next day.

### **Implementing Critical Validation Guardrails**

A paramount requirement was added to the script's logic: **strict validation before ingestion**. The script must check that the Client ID, Vendor ID, and Account Number parsed from a file name have a perfect match in the internal database (see the sketch at the end of this summary).

- **Successful Match:** Matched files proceed to the optimized ingestion flow, where a specific invoice template guides an LLM in data extraction. This is the desired, efficient path.
- **Failed Match:** Files that do not match **must not be processed at all**. They should be logged and set aside. This prevents files from falling into a generic manual review queue, which would create more work downstream and defeat the purpose of standardization.

### **Proactive Testing with Sample Data**

To verify the mapping logic, the team agreed to test the script against a **sample of 2-3 actual PDFs** before full-scale execution, confirming that the naming convention correctly extracts the necessary identifiers.

## **Future Process Enhancements and Metadata Tracking**

Looking beyond the immediate task, the conversation covered enhancements needed for a sustainable, long-term process.

### **Managing Account Status Post-Upload**

A follow-up step after uploading bills is to temporarily **"snooze" or pause the corresponding client accounts** for a few days. This precaution prevents duplicate processing until the team is confident the correct bills (e.g., the most recent by date) have been fully ingested.

### **The Imperative for Comprehensive Logging**

The team emphasized the need for the script to generate **detailed metadata logs**. Each file processing event should record when the file was fetched from the source (Arcadia), renamed, and ingested.

- **Purpose of Logging:** This creates an audit trail and history for each invoice, moving away from reconstructing events from email or chat messages. It lets the team know precisely which data came from which export job and when.
- **Enabling Advanced Operations:** Reliable metadata is foundational for future capabilities such as **duplicate detection** and understanding the chronological sequence of bills for the same account, which is vital for accurate financial tracking and reporting.
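A minimal sketch of the validate-before-ingest guardrail described above, reusing the hypothetical naming layout from the earlier sketch; the database lookup is stubbed with an in-memory set, and the log messages are illustrative.

```python
import logging
import re

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("ingest")

NAME_RE = re.compile(
    r"^(?P<client>[^_]+)__(?P<vendor>[^_]+)__(?P<account>.+)__\d{8}\.pdf$"
)

# Stub for the internal database: known (client, vendor, account) triples.
KNOWN_TRIPLES = {("ASC001", "V042", "12 345-678")}

def ingest(filename: str) -> bool:
    """Process a file only when its parsed identifiers match the database
    exactly; otherwise log it and set it aside, never the generic queue."""
    m = NAME_RE.match(filename)
    if m is None:
        log.warning("set aside (unparseable name): %s", filename)
        return False
    triple = (m["client"], m["vendor"], m["account"])
    if triple not in KNOWN_TRIPLES:
        log.warning("set aside (no database match): %s", filename)
        return False
    # Hand off to the optimized, template-guided DSS ingestion flow here.
    log.info("queued for template-guided extraction: %s", filename)
    return True

ingest("ASC001__V042__12 345-678__20250115.pdf")  # queued
ingest("ASC001__V999__00000__20250115.pdf")       # set aside, logged
```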
AP File Standardization & Banyan
## **Customer**

The customer is a significant strategic partner, referred to as Simon, operating with complex financial systems. Their background involves a legacy relationship with a previous provider (Angie) and an internal reliance on specialized knowledge for Accounts Payable (AP) file generation and Enterprise Resource Planning (ERP) integration. The unexpected passing of a key internal engineer, Tom Barrett, who possessed critical, undocumented knowledge of their AP file specifications, has created a major internal knowledge gap and stalled their ability to provide necessary requirements.

## **Success**

While the immediate context is focused on an active implementation challenge, the underlying product or service is designed to provide a critical financial consolidation and payment solution. The core value proposition is the ability to seamlessly pull funds from a designated bank account, pay bills on the customer's behalf, and deliver detailed AP files that integrate directly into their ERP system. This provides them with full financial visibility and accountability, transforming a simple bank withdrawal into a detailed, auditable record of exactly "who was paid what and when."

## **Challenge**

The single biggest challenge is a critical, escalating delay in obtaining the precise technical specifications required to build the AP data files. The blockage stems directly from the loss of the key internal expert and has been compounded by repeated meeting cancellations, rescheduling, and a lack of empowered personnel on the customer's side to provide answers. The delay has compressed the project timeline to a critical point, jeopardizing the agreed go-live date. Furthermore, there is recognized sensitivity around the transition from their former provider, adding complexity to aligning deliverables and timelines.

## **Goals**

The customer's primary goals are clear and time-sensitive:

- To successfully launch the service and meet the April 1st (March 31st) deadline for the new payment cycle.
- To receive accurate, customized AP files that can be ingested into their ERP system, providing the necessary financial reporting and audit trail.
- To manage a smooth transition away from their previous provider, Angie, which may involve navigating a termination notice period.
- To establish a reliable, accountable process for bill payments and financial consolidation through the new service.
Discuss Onboarding tracking
## Summary

### Onboarding Process and Timeline Expectations

The meeting focused heavily on reviewing and standardizing the customer onboarding process, with particular attention to timelines and required documentation. A key goal is to establish a clear, consistent **90-day rule for invoices**, ensuring that no invoice processed during onboarding is older than this threshold. The discussion revealed that while the current average onboarding time ranges from three to five months, there is an aspirational goal to reduce this to a **90-day average**. This duration includes customer dependencies such as scheduling kickoff calls and gathering documentation, which can cause significant delays.

- **Invoice Age Limitation:** A strict policy is being implemented: any invoice received that is over 90 days old cannot be processed (see the sketch later in this summary). This necessitates close collaboration with the sales team to set correct expectations from the contract stage.
- **Onboarding Timeline Analysis:** The team will create a detailed timetable to analyze how long it takes to onboard every 500 accounts. Historical data suggests past timelines of four to six months, partly due to the prioritization of other projects.
- **Tracking Customer-Caused Delays:** A significant pain point is the inability to easily track and attribute timeline extensions to client-side delays (e.g., rescheduling, slow responses). The team discussed leveraging HubSpot or existing CSM pipelines to log these reasons systematically, creating a defensible audit trail.

### Invoice Data Management and Cleanup

A major operational challenge discussed is the management and cleanup of account and invoice data across platforms (UBM and Data Services). Discrepancies and duplicate accounts are causing inaccuracies in critical reports such as the missing bills report.

- **Simplifying the Missing Bills Report:** The plan is to create a simplified, actionable report that lists accounts and their last actual bill date and excludes mock bills, moving away from the overly complex calculations that have caused issues.
- **Systematic Account Cleanup:** A weekly review process is required to clean up duplicate and erroneous accounts. This is an ongoing operational necessity, not a one-time fix.
- **Preventing Future Data Issues:** For new bills processed through DSS, entries created by the system will be tagged, allowing periodic bulk cleanup of any erroneous lines it generates. This acts as a stopgap until more robust validation guardrails are built into the system.

### Credential Management and Platform Integration

The team is working toward a "single source of truth" for customer credentials, which are currently stored in multiple disparate systems. This is part of a broader effort to standardize and link data across platforms.

- **Centralizing Credentials:** A primary action item is to chart all current locations where credentials are stored and work toward consolidating them into one authoritative system.
- **Standardizing Account Linking:** The process for linking billing accounts and locations between Data Services and UBM is under review to ensure consistency and accuracy.
- **Automation on the Horizon:** Future automation efforts are planned to extract invoice data via scanning technology, reducing manual entry and speeding up the onboarding process.
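Returning to the 90-day invoice rule above, a minimal sketch of the age check, assuming the threshold counts calendar days from the invoice date to the processing date; the names and threshold handling are illustrative, not the agreed policy text.

```python
from datetime import date, timedelta

MAX_INVOICE_AGE = timedelta(days=90)  # policy threshold from the meeting

def processable(invoice_date: date, processing_date: date) -> bool:
    """Enforce the onboarding rule: invoices older than 90 days at
    processing time are rejected rather than ingested."""
    return processing_date - invoice_date <= MAX_INVOICE_AGE

today = date(2025, 4, 10)
print(processable(date(2025, 2, 1), today))   # True: 68 days old
print(processable(date(2024, 12, 1), today))  # False: 130 days old
```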
### HubSpot Implementation for Ticket Management

The rollout of HubSpot as a centralized issue-tracking system is imminent, marking a shift away from managing requests and problems via email.

- **Internal Rollout First:** The system will be launched internally for a testing period of two weeks to a month before being made available to customers.
- **Structured Workflow:** The goal is to have all internal and, eventually, customer-issued tickets flow through HubSpot, improving visibility and tracking and ensuring no request is lost. Customer Support Managers (CSMs) will be the primary point of contact for tickets.
- **Training and Materials:** Training is being organized, and instructional videos are in progress to facilitate adoption across the team.

### Standardizing Customer Communication and Requests

The team identified a need to clean up and standardize communication channels, particularly email distribution lists, and to enforce consistent processes for customer requests such as adding new locations.

- **Distribution List Cleanup:** Multiple overlapping distribution lists (e.g., for CS and operations) have become cluttered. An action is in place to audit all lists, remove inactive members, and define a clear purpose for each to prevent emails from being ignored.
- **Process for New Locations/Accounts:** A standardized procedure was reiterated: customers must work with their CSM to add new locations to a central tracking document (e.g., a SmartSheet); the CSM must then add them to UBM and tag the data services team to enroll them. This keeps billing, credential setup, and platform updates in sync.
- **Reinforcing Official Channels:** The team plans to reinforce with clients the mandatory use of official distribution lists, or the future HubSpot ticket system, for all requests, especially urgent matters like disconnections, to ensure they are seen and acted upon promptly.

### SmartSheet Usage and Process Standardization

A significant portion of the meeting debated the role and format of SmartSheets used in customer onboarding and account management, as inconsistent use is creating operational inefficiencies.

- **Inconsistent Implementation:** SmartSheets are currently used differently by each CSM and customer. Some clients actively use them while others refuse, and the sheet format often gets customized per client request, breaking standardization.
- **Desire for Integration and Enforcement:** There is a strong desire to lock down a standard SmartSheet format and, ideally, integrate it directly within the UBM platform, making it the single, live source of truth for billing account lists and attributes, visible to all stakeholders without manual cross-referencing.
- **Shifting the Paradigm:** The discussion highlighted the need to move from a flexible, service-oriented model to a more productized one, recognizing the inefficiency of constantly adapting to unique client requests and the need to establish and enforce standard operating procedures for data exchange.
Ad-hoc meeting
Simon Properties Onboarding Plan - Internal *** High Importance ***
## Customer

The customer is a large enterprise managing utility accounts, likely in the energy or facilities management sector. They have transitioned from a previous bill pay supplier and are now engaged in a complex onboarding process involving thousands of utility accounts. Their background indicates a need for robust, automated solutions for invoice processing, bill payment, and data reporting to manage a high volume of transactions across multiple utilities and locations.

## Success

The most significant achievement highlighted is the successful integration and processing capability for a high volume of utility invoices through the platform. A key success is the operational setup for **bill pay functionality**, particularly the handoff to a payment processor. The system demonstrates the ability to map accounts, send payment files, and facilitate electronic payments, which is foundational to the customer's core operational efficiency. Furthermore, the discussion around creating **standardized AP file templates** points to progress in automating financial reporting, a major value driver for enterprise clients seeking to streamline their accounts payable processes.

## Challenge

The primary and most pressing challenge is the **excessively long and inconsistent onboarding timeline**, currently averaging around five months from contract execution. This is well above the industry average and is a major point of friction. The process is hampered by several interconnected issues:

- **Data and Credential Management:** The lack of a single "source of truth" for account credentials leads to discrepancies between systems. Credentials are stored and updated in multiple places (e.g., SmartSheets, Data Services, UBM), causing sync issues and manual workarounds.
- **Onboarding Process Complexity:** The initial onboarding template is considered overly burdensome by customers, requesting too much information upfront and creating a poor first impression. There is no clear, phased approach for collecting essential versus "nice-to-have" data.
- **Internal Coordination Gaps:** There is confusion and inconsistency in handoffs between teams (Sales, Customer Success, Data Services, Onboarding). This leads to missed requirements, late discovery of issues (such as needing EDI bills instead of portal access), and a lack of clear ownership for tracking credential updates from customers.
- **Account Reconciliation:** Discrepancies in the total number of active accounts between internal systems (UBM vs. Data Services) create reporting inaccuracies and operational confusion, exacerbated by the system's creation of "virtual accounts" for EDI bills and the lack of an automated process for closing finalized accounts across all platforms.

## Goals

The conversation reveals several clear customer goals that the service aims to fulfill:

1. **Reduce Onboarding Time:** Dramatically shorten the onboarding cycle from the current five-month average to be more competitive, ideally matching or beating the industry standard.
2. **Achieve End-to-End Automation:** Fully automate the bill upload, processing, and payment workflow, including automatically scanning and validating invoice dates, extracting account data, and syncing credentials seamlessly between all systems.
3. **Gain Clear Visibility and Reporting:** Have accurate, real-time visibility into their account portfolio, processing status, and performance metrics (SLAs). They need a single, reliable source for account counts and processing timelines to support their own internal reporting and financial close processes.
4. **Streamline Credential Management:** Implement a unified, automated system for managing and updating utility website credentials so bill pulls succeed and payment information stays current.
5. **Simplify the User Experience:** Move toward a more customer-friendly onboarding process where they can simply provide invoices or a credential and the system handles the rest, as opposed to manually filling out extensive spreadsheets.
Weekly Priority Meeting
## **Invoice Processing Status and Urgent Backlogs**

The meeting opened with a critical review of invoice processing volumes, revealing significant backlogs across major customer accounts. The data, current as of the previous afternoon, showed a mixed picture, with many invoices not yet processed. One customer had 53 invoices outstanding, nine of them already past due; another had over 2,000 invoices with only about 1,000 processed month-to-date. This highlighted a pressing capacity issue: the team is falling behind the volume of invoices received, risking service level agreements and customer satisfaction.

## **Critical Gaps in Reporting and Dashboard Visibility**

A major discussion point was the inadequacy of current Power BI reports for tracking progress; they currently filter only by the date an invoice was created in the system.

- **Need for invoice date filtering:** It was stressed that reports must also allow filtering by the actual invoice date to accurately reflect processing performance and customer billing cycles (see the filtering sketch later in this summary). A ticket needs to be created to address this gap.
- **Creating a unified operations dashboard:** There is a high-priority initiative to transform manually compiled spreadsheets into a live, comprehensive dashboard covering all customers and service types, providing a daily, at-a-glance view of processing status for the entire team.
- **Focus on underlying data transparency:** A key principle established was that while the dashboard is for monitoring, everyone must understand the source data and calculations behind it. This ensures the team can troubleshoot discrepancies, answer questions authoritatively, and pull data manually if the dashboard has technical limitations.

## **Systemic Issues: Broken Batches and Data Black Holes**

The conversation identified several systemic risks in the data pipeline that cause invoices to go missing or get stuck.

- **Ongoing broken batch problems:** The number of "broken batches" (invoice files that fail to import correctly) has increased again. A refactor of the underlying system is nearing completion to mitigate this, but it remains a concern.
- **Nervousness around invisible failures:** There is significant anxiety about invoices disappearing into a "black hole," output from one system but never appearing in the processing queue. Historical instances where over 160 invoices were lost were cited, emphasizing the need for better visibility and tracking to prevent customer complaints about missing invoices.

## **Downstream Impact: Bill Payment File Errors**

A surge in bill payment file errors was analyzed and traced back to incomplete setup during invoice processing.

- **Root cause in virtual account attributes:** Many errors occur because required virtual account attributes (such as specific bank account codes) are missing, often because the information was never provided by the customer during onboarding or wasn't properly added to the master data sheet.
- **Training and process gap:** A serious concern was raised that operators processing invoices may not be adequately trained or diligent in linking invoices and verifying all required attributes. This creates a domino effect: an invoice is processed but then fails at the payment stage, causing further delays and requiring manual intervention from customer success or other teams to resolve.
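A minimal sketch of the reporting gap described above: the same records filtered by system creation date versus actual invoice date give different monthly counts. The record shape is hypothetical.

```python
from datetime import date

# Hypothetical processed-invoice records: created_at is when the record
# entered the system; invoice_date is the date printed on the bill.
invoices = [
    {"id": 1, "created_at": date(2025, 3, 3),  "invoice_date": date(2025, 2, 25)},
    {"id": 2, "created_at": date(2025, 3, 10), "invoice_date": date(2025, 3, 8)},
    {"id": 3, "created_at": date(2025, 4, 1),  "invoice_date": date(2025, 3, 28)},
]

def in_month(d: date, year: int, month: int) -> bool:
    return d.year == year and d.month == month

# Filtering by creation date (current report) vs. invoice date (requested):
by_created = [i["id"] for i in invoices if in_month(i["created_at"], 2025, 3)]
by_invoice = [i["id"] for i in invoices if in_month(i["invoice_date"], 2025, 3)]

print(by_created)  # [1, 2]  March by system entry
print(by_invoice)  # [2, 3]  March by billing cycle
```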
## **Manual Payment Bottlenecks and Credential Issues**

The status of manual check payments was reviewed, with a focus on reducing reliance on this slow method.

- **Outstanding check payments:** There are 17 checks pending for two high-priority customers, with efforts underway to convert them to electronic payments. Another customer has 26 checks pending, primarily due to persistent credential issues preventing electronic payment setup.
- **Priority and escalation:** The strategy is to resolve the electronic payment setup for the two main customers first, then tackle the credential issues with the other. This is critical because manual checks inherently slow the payment cycle.

## **Resource Allocation and Contractor Management**

The discussion concluded with a focus on optimizing the contractor workforce to address the identified operational gaps.

- **Need for clear resource mapping:** There is a lack of clarity on where contracted resources are currently allocated and what specific tasks they are performing. A directive was given to compile a list detailing each contractor's primary focus area (e.g., downloads, data entry, outreach).
- **Ensuring productivity and proper placement:** The goal is to audit whether contractors are effectively utilized and to strategically reallocate resources where necessary, ensuring all available personnel are deployed to the highest-priority tasks, such as clearing invoice backlogs or managing download queues, rather than sitting underutilized.
Arcadia Follow up
## **Current State and Goals for Arcadia Integration**

The meeting primarily focused on the ongoing integration of the Arcadia system for automated bill retrieval, with a specific and urgent emphasis on the Ascension account. The core goal is to establish a reliable, automated pipeline to replace or augment the current manual processes. To achieve this, the discussion centered on understanding the existing setup, defining the technical steps for a functional prototype, and identifying the broader system impacts to ensure a smooth transition.

### **Existing Prototype Setup & Immediate Actions**

The current proof of concept uses a local PHP script to process bills. The immediate plan is to harden this prototype into a more robust staging solution before full production integration.

- **Local Script Process:** A PHP script running on a developer's machine ingests a CSV file of account data. It uses provided API tokens to call Arcadia's credentials endpoint and, if access is granted, fetches PDFs that are stored locally in a designated directory. This serves as a complete end-to-end test.
- **Enhancements for Staging:** The focus is on moving this process from a local machine to a server environment: connecting the script directly to a database for credentials and modifying the output to drop PDFs into a new, dedicated directory that mimics a production landing zone. Additional logging for API calls (who made them, when, and the response) is also a priority for traceability.
- **Naming Convention Definition:** A critical next step is to finalize and communicate a strict file naming convention for the PDFs retrieved by Arcadia. The convention must reliably encode the client, vendor, and account number so downstream systems can parse and process the files correctly, even though existing manual processes do not consistently follow such standards.

### **System Reports & Data Tracking**

A significant part of the discussion involved understanding how current operational reports would be affected by the Arcadia integration, ensuring that automatic retrieval doesn't create gaps in visibility.

- **Understanding the Lab Report:** The primary tool for tracking invoice status is the "lab" report, a Power BI dashboard that combines data from UBM and Data Services (DS). It shows the last invoice date received for each client-vendor-account combination and the projected date of the next invoice, highlighting accounts that are late.
- **Impact of Arcadia Integration:** The key insight is that Arcadia does not update these reports directly; the reports rely on data updated by downstream systems. When an Arcadia-retrieved PDF is processed through the standard Data Services (DSS) pipeline, just like a manually downloaded bill, it automatically updates the "last invoice date" in the system. Successful DSS processing is therefore the critical link for Arcadia activity to reflect correctly in all operational reports and task lists.

### **Duplicate Invoice Handling**

The conversation clarified the existing multi-layered system for preventing duplicate invoices, which Arcadia-sourced bills must integrate with seamlessly (see the sketch at the end of this summary).

- **Checksum at Ingestion:** The first automated check occurs the moment a PDF is uploaded. The system generates a checksum (a digital fingerprint) of the file and compares it to existing records. If an identical file has already been processed, it is rejected immediately as a duplicate.
- **Business Logic in DSS:** A second, more nuanced check happens during DSS processing. If the key fields of a bill (client, vendor, account number, service period, and amount due) match an existing invoice, DSS flags it as a potential duplicate for human review, even if the PDF files themselves differ (e.g., a web download vs. a scanned copy).
- **Implication for Arcadia Testing:** For the initial January bill run, there will likely be significant overlap with manually retrieved bills. The system's duplicate handling is designed to manage this, allowing the team to test the Arcadia flow without necessarily disrupting the existing manual workflow for the same period.

### **Task Management & Workflow Automation**

A major operational question addressed was how the automatic population of bills via Arcadia would interact with the manual task system for bill retrievers.

- **The Core Challenge:** The FDG Connect system generates tasks for operators based on invoice cycles. If Arcadia automatically delivers a bill, the corresponding manual task should ideally be dismissed to avoid redundant work.
- **Proposed Solution:** The prerequisite for automating task dismissal is reliable identification of the client, vendor, and account number from the Arcadia-sourced PDFs (via the naming convention). Once that is in place and the bill is successfully processed through DSS, the system should be able to automatically dismiss the related task for that billing cycle.
- **Transition & Webhooks:** The discussion looked ahead to moving from one-off "batch" retrievals to a continuous, automated process. The future state involves setting up webhooks with Arcadia so that when a new bill is available it is pushed automatically, starting a new 30-day cycle for the next expected invoice and creating a fully automated pipeline.

### **Broader Project Context & SOC 2 Preparation**

The meeting concluded with a brief alignment on other important initiatives, particularly the upcoming SOC 2 compliance audit.

- **Prioritization of Arcadia:** The integration work, especially for Ascension and a few other key customers, is the top technical priority for the immediate timeline, with the goal of having a solid handle on the process by the end of the month.
- **Efficient SOC 2 Collaboration:** To avoid the time-intensive, meeting-heavy approach of past audits, the plan for the upcoming SOC 2 (and SOC 1) work with Eden Data is a clear, asynchronous collaboration model: a single point of contact manages requests via Slack or tickets, providing detailed, actionable tasks to engineers (like Dan and Jay) without requiring them to join frequent calls, minimizing disruption to development workflows.
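A minimal sketch of the two duplicate checks described above: a file-level checksum at ingestion, then a key-field comparison during DSS processing. The hash choice and field names are assumptions; the notes confirm only that a checksum and those five business fields are used.

```python
import hashlib

seen_checksums: set[str] = set()
seen_keys: set[tuple] = set()

def checksum(pdf_bytes: bytes) -> str:
    """File fingerprint; identical files are rejected at upload."""
    return hashlib.sha256(pdf_bytes).hexdigest()

def ingest(pdf_bytes: bytes, client: str, vendor: str, account: str,
           service_period: str, amount_due: float) -> str:
    digest = checksum(pdf_bytes)
    if digest in seen_checksums:
        return "rejected: identical file already processed"
    seen_checksums.add(digest)
    key = (client, vendor, account, service_period, amount_due)
    if key in seen_keys:
        # Different file, same business fields (e.g., web download vs. scan):
        return "flagged: potential duplicate for human review"
    seen_keys.add(key)
    return "accepted"

print(ingest(b"%PDF-1 ...", "ASC", "V1", "A1", "2025-01", 120.50))  # accepted
print(ingest(b"%PDF-1 ...", "ASC", "V1", "A1", "2025-01", 120.50))  # rejected
print(ingest(b"%PDF-2 ...", "ASC", "V1", "A1", "2025-01", 120.50))  # flagged
```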
Plan for Arcadia
## **Current State and Objective of Arcadia Integration**

The meeting focused on accelerating the integration of Arcadia, a third-party provider, into the existing bill processing pipeline. The primary goal is to leverage Arcadia to automatically download and process bills for the Ascension client, circumventing the slower manual process currently handled by operators using the Web Downloader Helper. Approximately 1,800 credentials (bill sources) are ready for processing from Arcadia, with the ultimate aim of moving vendor bill retrieval away from internal tools and toward more robust third-party providers. The discussion centered on three key pillars: tracking the source of bills, automating the new workflow, and building foundational reports.

## **The Challenge of Source Tracking and Differentiation**

A core technical requirement identified is the ability to distinguish bills sourced from Arcadia from those coming through other channels, such as the internal Web Downloader Helper or another third party like BDE. Currently, files from BDE enter the system without clear source tagging, simply appearing via a file watcher. For Arcadia, it is crucial to tag files upon ingestion to enable accurate tracking, reporting, and process auditing. The proposed solution is to assign a dedicated system user (e.g., "Arcadia") to all bills uploaded through this new channel, reflected in the `created_by` column of the database, unlike the currently empty user field left by file-watcher imports (see the sketch at the end of this summary).

## **Credential Management and Data Mapping**

A significant prerequisite for automation is the secure storage and management of the client credentials Arcadia uses to access billing portals. These credentials are specific to each client account and include vital metadata such as the client account number, client name, and vendor. A new database table needs to be created to store this information, mapped to existing client and vendor records within the system. This mapping is essential because, when a bill is downloaded, the system must use the account number from Arcadia to look up the corresponding internal Client ID and Vendor ID. This information is then passed to the document processing service (DSS) during upload to ensure the bill is correctly associated.

## **Designing the Automation Service Architecture**

The conversation explored the technical design of the service that will interact with Arcadia. The envisioned service would use the new credential table to retrieve access details for client accounts, call the Arcadia API to download PDF bills, and, for each successful download, call the DSS upload API with the PDF and the mapped client and vendor identifiers. The design explicitly avoids a separate FTP folder or file-watcher setup for this new source; instead, it aims for a direct, service-to-service integration that pushes files into the processing queue with the correct "Arcadia" source tag.

## **Immediate Next Steps and Path Forward**

To maintain momentum, immediate action was emphasized. The team plans to spend time detailing the service setup. The initial, non-automated flow uses a CSV file of credentials, which can be sent to Arcadia to retrieve a batch of PDFs; these PDFs can then be used for initial testing to verify the upload and client-matching logic before the full automated service is built.
The focus for the current week is to make tangible progress on both the credential management system and the service design to hand off clear tasks to the development team.
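A minimal sketch of the two ideas above: a credential table mapped to internal client and vendor records, and a `created_by` source tag on ingested bills so Arcadia uploads are distinguishable from file-watcher imports. Table and column names are hypothetical, not the production schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE arcadia_credentials (
    id             INTEGER PRIMARY KEY,
    account_number TEXT NOT NULL,     -- as known to the vendor portal
    client_id      INTEGER NOT NULL,  -- maps to internal client record
    vendor_id      INTEGER NOT NULL,  -- maps to internal vendor record
    username       TEXT NOT NULL,
    secret_ref     TEXT NOT NULL      -- reference into a secure secret store
);
CREATE TABLE bills (
    id         INTEGER PRIMARY KEY,
    client_id  INTEGER NOT NULL,
    vendor_id  INTEGER NOT NULL,
    pdf_path   TEXT NOT NULL,
    created_by TEXT                   -- 'Arcadia' for this channel; file-watcher
                                      -- imports currently leave this empty
);
""")

db.execute("INSERT INTO arcadia_credentials VALUES "
           "(1, '12 345-678', 7, 42, 'ops', 'vault://cred/1')")

# On download, the account number from Arcadia resolves the internal IDs,
# which are passed to DSS along with the source tag.
client_id, vendor_id = db.execute(
    "SELECT client_id, vendor_id FROM arcadia_credentials "
    "WHERE account_number = ?", ("12 345-678",),
).fetchone()
db.execute(
    "INSERT INTO bills (client_id, vendor_id, pdf_path, created_by) "
    "VALUES (?, ?, ?, 'Arcadia')",
    (client_id, vendor_id, "/landing/arcadia/bill.pdf"),
)

print(db.execute(
    "SELECT created_by, COUNT(*) FROM bills GROUP BY created_by"
).fetchall())  # [('Arcadia', 1)]
```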
Plan for Arcadia
## Summary

The meeting focused on operational coordination and task planning.

### Coordination with Jake and Offshore Team

A key action was to connect with Jake to receive and relay necessary instructions. This coordination is critical for ensuring alignment on task execution and priorities.

### Offshore Team Schedule and Availability

The offshore team's upcoming reduced availability was noted: they will work a half day on the following day and will be entirely off on Wednesday. This information is essential for planning deadlines and managing workload expectations to avoid delays.

### Planning for Arcadia Setup Discussion

A follow-up discussion was scheduled to talk through the setup process for Arcadia. The timing was left open, with a request to confirm availability later in the day to solidify the plan.
Missing Bills Report
## Summary

The meeting centered on addressing critical issues with how missing utility bills are reported to customers, with a particular focus on the problematic inclusion of internally created "mock" bills in these reports. The discussion involved analyzing the shortcomings of existing reporting tools and defining the requirements for a new, streamlined solution.

### Problem Statement: Inaccurate "Missing Bills" Reporting

The core problem is that customers receive reports listing bills as "missing" when the company has actually created placeholder mock bills internally. This creates unnecessary concern and extra manual work for customer support teams, who must investigate these false positives.

- **Mock bills are misleading customers:** The current logic in reports like the "LAD report" or "billing accounts report" does not distinguish between truly missing bills and those where a mock bill has been created as a temporary placeholder. Customers see these as gaps in service, leading to repeated inquiries and eroding trust.
- **Underlying data inconsistencies compound the issue:** Problems with account number formats (flipped numbers, extra spaces) create duplicate accounts in the system. Furthermore, delays in data synchronization between the UBM platform and the Data Services (DS) backend mean a bill might be "in house" but not yet visible to the customer, incorrectly flagging it as missing.

### Analysis of Current Reporting Tools

Existing reports were deemed insufficient or too complex for solving the immediate problem, as they are built on flawed or overly intricate logic.

- **The "Bill Health" or "LAD" report has fundamental issues:** While its logic for calculating invoice due dates and lateness works, it suffers from underlying data cleanup problems. The report's complexity, with multiple tabs and calculations, is designed for manual operational analysis, not for providing a clear, customer-facing status on missing *actual* invoices.
- **A cleanup effort is inevitable but separate:** A broader data cleanup initiative for the entire platform is upcoming and necessary, as signaled by impending audits from clients like PPG. However, that is a distinct, larger project from the immediate need for an accurate missing bills report.

### Defining Requirements for a New Solution

The primary goal is to create a new, focused report that provides a single source of truth for customers regarding genuinely missing bills.

- **Core deliverable, a "Missing Bills" report:** The group agreed to name the new output simply the "Missing Bills" report to avoid confusion with existing tools. Its sole purpose is to show when the last *actual* (non-mock) invoice was received for a given account.
- **Key data points for the new report:** The report should include essential utility information such as location name, commodity type, and account number to be actionable. Crucially, it must **exclude all mock bills** from its calculations.
- **Simplified logic is preferred:** The proposal is to build a new report from underlying data sources rather than trying to fix the complex LAD report, forgoing complex latency calculations in favor of showing the last received invoice date, which gives a clearer picture.

### Technical Considerations and Implementation Plan

The discussion turned to the feasibility of building the new report, identifying available data points and potential hurdles.
- **Mock bills are technically filterable:** A key technical insight was that all mock bills created within the UBM platform carry a specific flag that can be used to exclude them from queries (see the query sketch at the end of this summary). This makes the primary requirement technically achievable without deep changes to existing report logic.
- **Challenges with data reconciliation:** A proposed shortcut, automatically treating any account present in UBM but not in Data Services as invalid, was rejected as too risky. Without manual diagnosis, there is no guarantee these represent errors rather than simple synchronization delays or setup oversights.
- **Scope limitations are clear:** The new report will not solve perennial data issues like duplicate accounts caused by formatting errors. Identifying and cleaning these duplicates remains a manual operational task outside the scope of this development work.

### Next Steps and Prioritization

The conversation concluded with an outline of immediate actions and a recognition of competing priorities.

- **Immediate action is to design the query:** The plan is to review existing data-pull instructions, incorporate the step that filters out mock bills using their flag, and design a coherent query for the new report. This design work is the immediate next step.
- **Development must be prioritized among other tasks:** The engineering work for the Missing Bills report will need to be queued alongside other critical items, including performance improvements for Power BI and issues related to Ascension. The report is urgent but not the only technical priority.
- **Acknowledgment of resource constraints:** There was a clear understanding that solving the root causes (such as data synchronization and cleanup) would require more than just a new report, potentially involving other teams, but current resources are limited. The new report is seen as a crucial stopgap to alleviate immediate customer pressure.
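A minimal sketch of the core query for the new report: the last actual invoice per account, excluding mock bills via their flag. The schema is hypothetical; the notes confirm only that a mock-bill flag exists in UBM.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE invoices (
    account_number TEXT,
    location_name  TEXT,
    commodity      TEXT,
    invoice_date   TEXT,     -- ISO dates sort correctly as text
    is_mock        INTEGER   -- 1 = internally created placeholder
);
INSERT INTO invoices VALUES
 ('A1', 'Plant 7', 'electric', '2025-01-10', 0),
 ('A1', 'Plant 7', 'electric', '2025-02-11', 1),  -- mock: must not count
 ('A2', 'Depot 3', 'gas',      '2024-12-02', 0);
""")

# Last *actual* (non-mock) invoice per account, with utility context.
rows = db.execute("""
    SELECT account_number, location_name, commodity,
           MAX(invoice_date) AS last_actual_invoice
    FROM invoices
    WHERE is_mock = 0
    GROUP BY account_number, location_name, commodity
    ORDER BY last_actual_invoice ASC
""").fetchall()
for row in rows:
    print(row)
# ('A2', 'Depot 3', 'gas', '2024-12-02')
# ('A1', 'Plant 7', 'electric', '2025-01-10')  <- the 02-11 mock is ignored
```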
Plan for Arcadia
## Summary

This meeting centered on the integration of a new data retrieval service, Arcadia, focusing on operational processes, client selection, and the development of necessary tracking systems. Key challenges around credential management, data synchronization, and reporting visibility were discussed in depth to ensure a successful rollout.

### Credential Management and Data Synchronization

This section detailed the complex process of handling login credentials when using the Arcadia service, highlighting the need for clear procedures to correct failures and synchronize data across systems. A primary distinction was made between two products: the "plug" service and "credential management." For the plug service, the workflow involves receiving a report of failed credential attempts (e.g., 15 out of 185), which are then fixed and resent. When credential management is involved, a more collaborative, conversational process with the client is required to resolve bad credentials managed on their end. The core challenge is defining clear responsibility lines ("swim lanes") for fixing credentials between the internal team and the client.

- **Centralized System for Correction:** A major operational gap was identified: the current use of Excel for tracking credential corrections is unsustainable. The immediate need is a centralized system to log Arcadia API calls, timestamp corrections, and document which credentials were validated as correct for each customer. This system is essential for auditing and for ensuring fixes are applied consistently.
- **Resolving Data Conflicts:** A significant technical hurdle involves discrepancies between credentials stored in the UBM system and the Data Services (DS) system. Because syncs are not real-time, the systems cannot autonomously determine which credential is correct when they disagree. The proposed solution involves identifying duplicates and then requiring manual human review to establish a single source of truth. A future state would have a single, third credential table that all applications point to, but an interim validated table within the DS app is planned.
- **Initial Workload Concerns:** It was acknowledged that cleaning up credential discrepancies could initially be a substantial, time-consuming task, especially with a high failure rate at launch. The hope is that after a significant initial cleanup, the volume of weekly discrepancies will trend downward.

### Client Selection for Initial Arcadia Rollout

The discussion evaluated which clients should be part of the initial, phased rollout of the Arcadia integration, balancing risk and operational learning. There was strong concern about including a high-profile, problematic client ("Victor") in the first phase, as any integration issues could exacerbate an already tense relationship. The consensus was to start with smaller, more manageable clients, allowing the team to adjust and correct anomalies with less visibility and pressure.

- **Recommended Initial Clients:** Three clients were selected for the initial rollout: Park, PNB, and Medline. Medline was noted as a good candidate because their credentials had recently been validated and confirmed to be working. Jacobs was designated as the next candidate following these three.
- **Rationale for the Phased Approach:** This cautious approach allows the weekly steering committee to reassess progress and readiness before escalating to larger or more complex clients. The team is proceeding with setting up these initial clients for the Arcadia pull.
### Tracking and Reporting for Arcadia Data Retrieval

A critical requirement identified was a real-time tracking dashboard to monitor the success of Arcadia data pulls, moving away from unreliable, manual checks across multiple systems. The goal is a client-level tracker that shows, for a given month, the universe of accounts, which ones have been requested from Arcadia, and which have been successfully received. This provides visibility into what is still "missing" at any point during the month, not just at the period's end.

- **Data Aggregation Challenge:** The tracking logic must account for Arcadia's operational model: they pull data at the credential level, not the account level, and return data for all accounts under that credential. This can mean multiple API calls and potentially multiple PDFs for a single account. The proposed view needs to roll up this activity to answer a simple question per account: "Did we get it this month?" (see the sketch at the end of this summary).
- **Handling Unmatched Data:** A related issue is that Arcadia may pull data for accounts not under management with the client (e.g., 10 accounts on a utility site when only 7 are contracted). These unmatched PDFs would go into a separate folder, and a significant business concern is being charged for processing these successful but unnecessary PDFs. A future "preview" API endpoint from Arcadia could help selectively pull only needed accounts, but it is not currently available.
- **Urgency for a Solution:** With an initial pull of 1,800 accounts already executed, the need for such a tracker is immediate. The team cannot rely on piecing together information from multiple legacy systems (DS Lab, UBM Lab, etc.) to answer client inquiries about missing invoices. The new tracker must provide a clear, authoritative status for each account.
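A minimal sketch of the credential-to-account rollup described above: Arcadia pulls at the credential level and may return several PDFs per account, so the tracker collapses pull activity into one answer per managed account, received this month or still missing. Data shapes are hypothetical.

```python
# Accounts under management for a client this month.
managed = {"A1", "A2", "A3", "A4"}

# Hypothetical Arcadia pull results: each credential returns every account
# it can see, possibly with multiple PDFs, and possibly accounts that are
# not under management (A9 below).
pulls = [
    {"credential": "cred-1", "account": "A1", "pdfs": 2},
    {"credential": "cred-1", "account": "A9", "pdfs": 1},  # unmatched
    {"credential": "cred-2", "account": "A2", "pdfs": 1},
]

received = {p["account"] for p in pulls if p["pdfs"] > 0}

status = {acct: ("received" if acct in received else "missing")
          for acct in sorted(managed)}
unmatched = sorted(received - managed)  # pulled but not contracted

print(status)
# {'A1': 'received', 'A2': 'received', 'A3': 'missing', 'A4': 'missing'}
print(unmatched)  # ['A9'] -> goes to the separate folder; a billing concern
```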
Daily Progress Meeting
## Summary

The meeting covered a range of operational topics, focusing on current data issues, the planning of upcoming procedures, team responsibilities, and resource allocation. The conversation centered on improving processes and clarity to address ongoing challenges and prepare for organizational changes.

### Current Data Processing and Cleanup

A significant portion of the discussion was dedicated to ongoing data cleanup efforts and processing issues with specific vendors and customer accounts.

- **Pending Bills and Data Validation:** Several batches of bills from specific dates are still pending in "data valid verification," with suspicions that a bulk update attempt may have caused them to fail. Cleanup is emphasized to prevent future escalations.
- **Targeted Cleanup for Specific Accounts:** Efforts are underway to reprocess bills for certain customers after reinstating validations, mirroring a previous cleanup done for another account. This reflects a proactive approach to correcting data integrity for impacted clients.
- **Tracking and Accountability:** There is a focus on identifying who is responsible for reviewing and resolving these lingering data issues, particularly for a specific park account, to ensure accountability and progress.

### SOP Meeting and Process Standardization

Planning for an upcoming Standard Operating Procedures (SOP) meeting was a key agenda item, with several critical areas identified for documentation and alignment.

- **Primary Focus Areas for SOPs:** The meeting will concentrate heavily on standardizing the **onboarding process** and establishing clear procedures for **leadership reports and metric reviews**. The goal is to create repeatable, documented processes for these core activities.
- **Including New Workflows:** There was agreement to add discussion of the workflow for tasks being transitioned to Arcadia, specifically to clarify the process flow, flag logic, and expectations and eliminate team-wide confusion.
- **Additional Process Documentation:** The need for an SOP covering invoice processing SLAs (service level agreements) was also noted, reflecting a desire to formalize timing and expectations across operational tasks.

### Metrics, Reporting, and Customer Communication

Improving the accuracy and automation of metrics was discussed as a priority to meet leadership demands and enhance internal visibility.

- **Automating Manual Reports:** A plan was set to take over the manual weekly reports currently run by team members and build them into an automated Business Intelligence (BI) dashboard, saving significant time and providing daily visibility into key performance indicators.
- **Driving Factors for Reporting:** The push for better "missing bill" tracking and reporting is directly linked to current operational problems, as customers now demand more transparency because of these issues.
- **Progress on Dashboard Development:** Encouraging progress was reported on systematically pulling and visualizing metrics, which should soon replace the need for manual report compilation.

### Team Structure, Handoffs, and Accountability

Clarifying team responsibilities, especially during customer onboarding and transitions, was a major point of concern to prevent tasks from being dropped.
- **Ownership of Credential Acquisition:** A clear issue was identified: team members assisting with obtaining utility credentials during onboarding were stopping work once a customer went "live," leading to dropped tasks. It was stressed that onboarding ownership continues until credentials are fully secured, and handoffs require explicit communication.
- **Utility Outreach Team Capacity:** It was acknowledged that there is no robust, dedicated utility outreach team, so tasks cannot simply be handed to a non-existent group. Accountability must remain with the team that started the work.
- **Coordination with Rebates Team:** Tracking progress is difficult when members of the rebates team assist intermittently, since their primary duties can pull them away, creating blind spots in the process.

### Onboarding, Training, and Resource Allocation

The discussion turned to leveraging new resources and refining training to address operational bottlenecks and improve process execution.

- **Utilizing New Co-op and Contractor Resources:** With new co-op students and contractors becoming available, the strategy is to potentially assign them to the complete and proper "lab" process (likely a training or quality-assurance environment), which has been difficult to maintain.
- **Strategic Cleanup Priorities:** A directive was given to avoid spending time cleaning up data for a specific problematic customer ("Bandit") that is expected to churn soon, and to focus cleanup efforts on other accounts to maximize resource efficiency.
- **Clarifying Roles for Support Staff:** Questions were raised about current support staff assignments, indicating some ambiguity in the division of labor between tasks such as data downloads and utility outreach.
Arcadia Integration Plan
## Summary The meeting covered a wide range of critical operational and technical initiatives, from urgent infrastructure migration and credential system overhauls to the integration of new automation services and addressing persistent process bottlenecks. The core theme centered on moving from reactive, manual fixes to building scalable, automated solutions that provide full visibility into data flows and system states. ### Urgent Cloud Infrastructure Migration A pressing priority is the immediate migration to Azure to alleviate severe operational and compliance burdens. The current on-premises environment has become unsustainable, creating significant overhead in managing infrastructure, vendor security, and compliance reporting. A recent incident involving false positives in a network penetration test report, which was incorrectly shared with a potential customer, highlighted the risks and wasted resources associated with the current setup. The migration to a managed cloud service (Constellation) is seen as essential to offload these challenges and prevent future, time-consuming fire drills. ### Overhaul of Credential Management and Validation Establishing a single source of truth for customer credentials is a foundational project to resolve chronic data issues. Currently, credentials are stored disparately across systems like UBM and DSS, leading to mismatches, manual reconciliation, and failed automated processes. The plan involves three key phases: first, a cleanup of existing credential data; second, the development of a unified interface within DSS to view and manage credentials by customer, vendor, and account; and third, ensuring all systems point to this single credential table. A dedicated "credential validation" view is also planned to instantly flag discrepancies between systems, which will be crucial for ongoing maintenance and support; a minimal sketch of that comparison follows this summary. ### Arcadia Integration for Automated Bill Retrieval Integrating the Arcadia service to automate PDF bill retrieval from vendor portals is a major automation initiative. While initial tests are promising, significant architectural considerations must be addressed to ensure reliability and auditability. Key requirements include creating a robust link between the Arcadia API request/response and the specific client-vendor-account it belongs to, establishing a DS-to-UBM vendor crosswalk, and ensuring the entire chain, from the credential used to the final processed bill, is fully traceable. Initially, the focus will be on processing Arcadia-retrieved PDFs through the existing DSS pipeline, postponing a direct JSON integration to avoid introducing new quality assurance complexities prematurely. ### Standardization with Build Templates and AI Processing Leveraging and cleaning up "build templates" is critical for standardizing bill processing and reducing error rates. A live system now uses these templates to guide AI (LLM) processing, providing consistent instructions based on customer-vendor-account setups. However, a cleanup of approximately 60,000 legacy line items is required, and a gap remains for roughly 20% of bills (e.g., from BDE or email) that lack the necessary metadata to map to a template. The long-term vision is a self-service model where issues can be traced directly back to the build template setup, shifting the focus from fixing individual bills to correcting root configuration issues. ### Operational Challenges and Team Process Alignment Internal operational inefficiencies and communication gaps are creating significant drag.
Examples include cumbersome, manual email-based workflows for simple client updates that should be automated in the UI, and developers working on features with zero user adoption. There is a strong need to better define roles and expectations, particularly in communication for remote work, and to prevent system-level personnel from being pulled into individual bill-level support tasks. A strategic shift is needed to empower customer-facing teams with better tools and visibility, allowing technical teams to focus on systemic product fixes rather than reactive firefighting.
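To make the planned "credential validation" view concrete: the comparison it performs is essentially a keyed diff across the two credential stores. Below is a minimal sketch of that logic; the record shape and field names are illustrative assumptions, not the actual DSS or UBM schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    customer: str
    vendor: str
    account: str
    username: str
    password: str

def credential_mismatches(dss_creds: list[Credential],
                          ubm_creds: list[Credential]) -> list[tuple]:
    """Diff the two stores keyed on (customer, vendor, account): flag keys
    missing from either side and keys whose login details disagree."""
    dss = {(c.customer, c.vendor, c.account): c for c in dss_creds}
    ubm = {(c.customer, c.vendor, c.account): c for c in ubm_creds}
    issues = []
    for key in dss.keys() | ubm.keys():
        d, u = dss.get(key), ubm.get(key)
        if d is None:
            issues.append((key, "missing in DSS"))
        elif u is None:
            issues.append((key, "missing in UBM"))
        elif (d.username, d.password) != (u.username, u.password):
            issues.append((key, "login details differ"))
    return issues
```

Keying on the client-vendor-account triple mirrors the unified interface described above, so every flagged discrepancy is immediately attributable to a specific account.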
Constellation Bills
## **Customer** The customer is an energy management company operating under the Constellation brand, with a dedicated team (Navigator) handling customer success and operations. Their core function involves managing and paying utility bills for their commercial clients. This requires establishing and maintaining thousands of online credentials (usernames/passwords) with various utility providers to electronically access billing statements and account data. Their operations team is responsible for the initial setup and ongoing maintenance of these credentials, which is a complex and resource-intensive process. ## **Success** The most significant success achieved with the service is the strategic decision to offload the entire burden of credential management. By transitioning this function, the customer can eliminate the substantial internal effort and cost associated with setting up new utility accounts, troubleshooting login failures, and managing multi-factor authentication (MFA) challenges. The service will provide a dedicated, managed instance to handle the enrollment and continuous health monitoring of up to 7,000 credentials. This shift allows the customer's team to re-focus their efforts on higher-value activities related to their core business of customer service and payment operations, rather than on the technical minutiae of utility website integrations. ## **Challenge** The primary challenge centers on the initial onboarding and data transition process. The customer's existing credential data is often incomplete, lacking fields like phone numbers which are sometimes required by utilities for verification, creating a potential bottleneck for enrollment. Furthermore, handling Multi-Factor Authentication (MFA) for existing accounts presents a complex hurdle. For credentials they currently manage, resolving MFA may require cumbersome coordination with end-clients to receive codes or a lengthy process to change the authentication method to one the service can control. The challenge is to navigate these provider-specific MFA landscapes and data gaps efficiently to minimize delays in bringing accounts live on the new managed platform. ## **Goals** The customer's key goals for this engagement are clearly defined: - To successfully onboard an initial batch of approximately 2,900 active credentials from a specific client into the managed service as a first phase. - To ultimately migrate a total of up to 7,000 credentials to the service for ongoing management and maintenance. - To establish a transparent and frequent (initially weekly) reporting cadence to track the status of credential enrollment, identify any issues requiring their action, and understand the reasons for delays. - To gain real-time visibility into credential status, either through shared tracking documents or API access, to inform their own internal operations and billing workflows. - To achieve a seamless technical integration where they can reliably pull utility statement data via API from the new managed instance, just as they do from their current platform.
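The reporting and visibility goals above could reduce to a periodic status pull against the managed instance. The following is a hedged sketch of what such a pull might look like; the endpoint, authentication scheme, and response fields are all hypothetical placeholders, since the actual API is not described here.

```python
import requests

BASE_URL = "https://managed-instance.example.com/v1"  # placeholder, not the real endpoint

def enrollment_status_counts(api_key: str) -> dict[str, int]:
    """Tally managed credentials by enrollment status to feed the weekly
    report (response shape is assumed for illustration)."""
    resp = requests.get(
        f"{BASE_URL}/credentials",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    counts: dict[str, int] = {}
    for cred in resp.json()["credentials"]:
        counts[cred["status"]] = counts.get(cred["status"], 0) + 1
    return counts  # e.g. {"active": 2400, "mfa_blocked": 310, "pending": 190}
```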
Discuss Aurora Champion Invoice
## Customer The customer is a large university with a complex utility management structure, operating multiple physical locations. They have numerous individual meters for electricity consumption across their campus, but these are consolidated into a single, large summary bill from their utility provider. Their background indicates a need for detailed, location-specific energy usage data to support accurate tracking, reporting, and potentially cost allocation across different departments or buildings within the university. This reflects a common need in institutional settings for granular data from aggregated utility invoices. ## Success The primary value demonstrated for the customer lies in the platform's core functionality for handling complex utility data. The discussion confirms that the system is fundamentally capable of the required task: breaking out a consolidated bill into individual sub-accounts based on meter numbers. The proposed solution involves creating distinct data blocks for each meter's usage, which aligns with established best practices for customers in similar situations. This capability is central to the product's value proposition for managing large, multi-meter accounts. ## Challenge The most significant challenge encountered by the customer has been a prolonged failure to achieve the correct implementation of the bill breakout feature. Despite the product's capability, there was a clear breakdown in execution. The customer received a consolidated bill where all usage was incorrectly attributed to a single building, rendering the data unusable for their location-based analysis. This issue persisted for an extended period, with customer requests reportedly going unaddressed, leading to frustration. The problem is compounded by its scale, as all historical bills dating back to the onboarding period (approximately September 2024) require manual rework to correct the usage allocation across meters and locations. Furthermore, the recent switch to a new energy supplier adds another layer of complexity to ensuring the solution is correctly applied moving forward. ## Goals The customer's objectives are clear and center on data accuracy and utility management efficiency: - To have electricity usage correctly broken out and attributed to its corresponding physical location (building) based on the individual meter number. - To have all historical billing data corrected retrospectively to reflect the proper usage per location. - To ensure that the billing structure correctly handles both distribution and supply charges within the new, corrected format. - To establish a clean, sustainable process for future bills, especially under their new energy supplier, to prevent a recurrence of the issue.
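The bill-breakout requirement is, at its core, a grouping problem: each line of the consolidated bill must be assigned to its meter, and each meter to a location. A minimal sketch follows, assuming a simple line-item structure and a meter-to-building mapping maintained during onboarding (all names are illustrative):

```python
from collections import defaultdict

# Hypothetical meter-to-building mapping maintained during onboarding.
METER_LOCATIONS = {
    "MTR-001": "Science Hall",
    "MTR-002": "Library",
}

def break_out_summary_bill(line_items: list[dict]) -> dict[str, list[dict]]:
    """Group the consolidated bill's line items into per-meter blocks and tag
    each with its location, so usage lands on the right building."""
    blocks: dict[str, list[dict]] = defaultdict(list)
    for item in line_items:
        meter = item["meter_number"]
        blocks[meter].append({**item, "location": METER_LOCATIONS.get(meter, "UNMAPPED")})
    return dict(blocks)
```

The same grouping would need to run over the historical bills back to September 2024 during the retroactive correction, and distribution versus supply charges would simply be line items carried inside each meter's block.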
Arcadia Rollout Plan
## Summary The meeting focused on critical operational challenges with manual invoice downloads and credential management, centering on the urgent need to fully leverage the Arcadia platform to automate these processes and ensure timely bill payments. ### Escalating Problems with Manual Invoice Processing The current manual system is failing, causing significant delays and financial risks. A specific incident was highlighted where a key client questioned why a utility bill issued on December 16th wasn't pulled until January 5th, missing its January 2nd due date. This delay exposes the company to potentially **astronomical late fees**, especially given the high dollar value of the accounts in question. The situation is deemed unsustainable, with the team acknowledging they are consistently behind despite extensive manual effort. ### The Strategic Decision to Fully Adopt Arcadia A consensus was reached to abandon the cautious, phased approach and fully integrate Arcadia for both credential management and automated invoice downloads. The reasoning is that the current state is so problematic that the risk of moving to Arcadia is lower than the risk of maintaining the status quo. The immediate plan is to transition all invoice downloading for a major client portfolio (Extension) to Arcadia as soon as possible, followed by another large portfolio (Wiktra). ### Complexities of Credential Management with Arcadia While pushing existing working credentials to Arcadia for download purposes is straightforward, setting up new or correcting faulty credentials through their management service is complex and time-sensitive. - **Required Data Hurdles:** Arcadia's credential setup requires detailed information (e.g., tax ID, corporate name, phone number) that the company does not typically collect, which could create the same blockers currently experienced internally. - **Lengthy Setup Timelines:** The credential management service itself can take up to **three months** to fully establish an account, underscoring the urgency to start the process immediately to avoid the same problems recurring months later. - **Validation and Handoff Process:** A clean, validated list of credentials and active accounts must be compiled and sent to Arcadia. This involves reconciling data between different internal systems to ensure accuracy before handoff. ### Technical Integration and Process Flow The discussion outlined the technical pathway for Arcadia-sourced data to enter the company's systems, prioritizing a low-risk initial approach. - **Initial PDF-Based Integration:** Initially, Arcadia will deliver PDFs, which will be fed into the existing Data Services (DS) pipeline exactly like a manually downloaded bill. This avoids creating a third, parallel system to manage. - **Future JSON Integration:** The ultimate goal is for Arcadia to send structured JSON data directly to the billing system (UBM), bypassing DS entirely, but this is a longer-term project due to mapping complexities. - **Preventing Duplicate Work:** Internal tools will be modified to automatically flag and remove tasks for accounts assigned to Arcadia, preventing team members from manually downloading bills that Arcadia is already tasked with retrieving. ### Mitigating the Risk of Duplicate Payments A major concern is that Arcadia might download bills that have already been processed, leading to duplicate payments. 
- **Existing Duplicate Checker:** The system has a duplicate checker that catches identical bills entering from different sources (e.g., mail vs. manual download), which should also catch Arcadia-sourced duplicates. - **Critical Exceptions:** This checker may **not catch notices or bills with late payment fees**, as these could be considered different documents. - **Testing Strategy:** One proposed method is to let Arcadia pull all bills and rely on the duplicate checker, then analyze the results. An alternative, more cautious approach is to first identify and exclude high-value accounts that have already been paid this cycle, starting the test with lower-value or unpaid bills to minimize financial risk. ### Immediate Action Plan and Next Steps The team agreed on a rapid execution plan to operationalize the decision. - **Priority Portfolio:** The Extension client portfolio (over 3,000 accounts) will be the first group transitioned to Arcadia. - **Credential List Finalization:** A validated and cleaned list of credentials and active accounts for Extension will be prepared for internal review, with a target to finalize and send it to Arcadia. - **Alignment with Arcadia:** A follow-up call with Arcadia is needed to clarify required fields for credential management and to understand their process for flagging and resolving data issues during setup. - **Internal System Readiness:** The engineering work to integrate Arcadia's PDF output into the existing DS pipeline must be prioritized to enable the transition.
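The more cautious testing strategy described above can be expressed as a simple filter over the account list. A sketch, with hypothetical field names and an assumed dollar threshold for "high-value":

```python
def select_pilot_accounts(accounts: list[dict], paid_this_cycle: set[str],
                          high_value_threshold: float = 10_000.0) -> list[str]:
    """Cautious first pull: exclude high-value accounts already paid this
    cycle, so a duplicate the checker misses can't trigger a large double
    payment. Field names and the threshold are illustrative."""
    pilot = []
    for acct in accounts:
        already_paid = acct["account_number"] in paid_this_cycle
        high_value = acct["avg_bill_amount"] >= high_value_threshold
        if already_paid and high_value:
            continue  # defer until the duplicate checker is proven on Arcadia data
        pilot.append(acct["account_number"])
    return pilot
```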
UBM Planning
## Summary The meeting primarily focused on resolving critical data synchronization discrepancies between the Data Services (DS) and UBM systems, outlining a cleanup strategy, and addressing an urgent technical issue with a vendor portal. A follow-up action on credential management was also discussed. ### Data Synchronization Discrepancies Between DS and UBM The core issue discussed was the mismatch in account records between the Data Services (DS) and UBM systems, with a specific focus on the Ascension account. The root cause appears to be that DS sometimes updates account numbers for existing billing IDs, whereas UBM retains the original account record from the initial setup bill, leading to duplicate or orphaned records in UBM. - **Identifying the Mismatch:** A significant number of Ascension accounts exist in UBM but are no longer present in DS, creating confusion and extra work. For example, the "Anderson" accounts showed a pattern where digits in the account number were flipped between the two systems. - **Root Cause Analysis:** The discrepancy occurs because UBM only creates accounts when receiving data from DS. If DS later updates an account number (instead of closing the old one and creating a new one), UBM is left with a stale record while DS moves on, breaking the link between the systems. - **Impact on Reporting:** This misalignment causes reports, like the late bill (LAD) list, to be inflated with accounts that may not require action, wasting operational time and creating reporting inaccuracies. ### Strategy for Account Cleanup and Investigation A multi-step approach was proposed to clean up the discrepancies, emphasizing caution to avoid creating new problems. - **Initial Focus on Ascension:** The team agreed to start with a deep dive into the Ascension account discrepancies as a pilot to understand the patterns before scaling the effort to other clients. - **Caution Against Premature Removal:** There was a strong recommendation against automatically removing unmatched accounts from the LAD report without full investigation, due to the risk of missing legitimate late bills and facing accountability questions. - **Proposed Investigation Paths:** Several methods to find the correct links between systems were brainstormed: - Comparing service addresses and commodities, which are more distinctive than location addresses, to match records. - Requesting a list of *closed* accounts from DS to see if simply closing the orphaned records in UBM resolves many cases, which would be a simpler solution than re-linking. - Checking if historical data exports or unique database IDs exist in DS that could provide a persistent link to UBM records even after account number changes. ### Updates to Reporting and Access Brief updates were provided on enhancements to business intelligence tools and user permissions. - **Enhanced Report Data:** The "raw account number" field has been added to the master counts report to aid in the cleanup and investigation work. - **User Access Management:** A request was made to grant a user named Jeffrey access to all BI reports on the consulting side, a task that requires specific administrative permissions. ### Credential Management System Overhaul An initiative to centralize and synchronize login credentials for vendor portals was discussed, aiming to eliminate future sync issues. 
- **Immediate Cleanup Required:** A spreadsheet comparing credentials between DS and UBM has been prepared and needs review by the operational team (led by Afton) to determine which records are accurate. - **Long-term Solution:** A new credential management interface is being built within the DSS application. This will serve as a single source of truth, providing instantaneous synchronization between all systems and replacing the current manual and error-prone processes. ### Urgent Issue: Constellation Energy Portal (CDP) Access An urgent, blocking issue was raised regarding the Constellation Energy Portal (CDP). - **Problem Identified:** The company's primary CDP profile has hit a system-imposed limit for the number of linked accounts, preventing any bills from being searched or downloaded, which halts operations for that vendor. - **Proposed Resolution:** The immediate workaround is to request that Constellation create a new portal profile under a different team member's name (e.g., Tara or Mary) to house additional accounts. This is considered a critical fix that needs to be implemented rapidly to avoid billing delays.
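The proposed investigation path of matching on service address and commodity lends itself to a straightforward normalization-and-lookup pass. A minimal sketch, assuming simple dict-shaped records (field names are illustrative, not the actual DS or UBM schema):

```python
import re

def norm_address(addr: str) -> str:
    """Normalize a service address for comparison: strip punctuation,
    collapse whitespace, lowercase."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", addr)).strip().lower()

def match_orphans(ubm_orphans: list[dict], ds_accounts: list[dict]) -> list[tuple]:
    """Pair UBM records that lost their DS link by comparing service address
    and commodity, which are more distinctive than location addresses."""
    ds_index = {
        (norm_address(a["service_address"]), a["commodity"].lower()): a
        for a in ds_accounts
    }
    matches = []
    for orphan in ubm_orphans:
        key = (norm_address(orphan["service_address"]), orphan["commodity"].lower())
        if key in ds_index:
            matches.append((orphan["account_number"], ds_index[key]["account_number"]))
    return matches
```

A pass like this would also surface the "Anderson" flipped-digit pattern, since two records matching on address and commodity but not on account number are exactly the re-link candidates being sought.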
Quick Greeting
## Summary Because the audio clip provided was only a few seconds long, no actual topics or discussions were covered in this "meeting." The only content was a passing greeting, which does not form the basis of a conversation or business meeting that could be summarized.
UBM-DS Alignment
## **Summary** The meeting focused on resolving critical data integrity issues across the billing and invoice processing systems, with a strong emphasis on establishing a single source of truth for account data. Discussions centered on immediate problems with lost invoices, long-term systemic improvements to validation logic, and the need for unified reporting. ### **Immediate Issues with Extension Accounts and System Gaps** A primary concern was a batch of extension accounts that appeared as "completed" in Data Services but never propagated to the UBM (Utility Bill Management) platform. The root cause is suspected to be a broken file stuck in an email queue. With a key development team unavailable, immediate resolution was pending, highlighting a fragile point in the data pipeline. Furthermore, it was noted that UBM currently has a significant backlog of approximately 2,400 items created from December to the present, indicating ongoing processing challenges. ### **Reconciling the Master Account List as the Source of Truth** A major point of alignment was the urgent need to reconcile account totals between Data Services and UBM, as current numbers do not match, sometimes by hundreds of accounts. UBM was agreed upon as the definitive source of truth for what accounts the company and its customers can see. The core problem is that discrepancies cause "missing" invoices to go unnoticed. A manual reconciliation exercise for one customer (Extension) is already underway to identify reasons for mismatches, such as formatting differences in account numbers or unlinked accounts. A broader, systematic cleanup involving pulling complete lists from both systems into a consolidated document (like Excel) for team review was proposed as a necessary next step; a sketch of the core comparison follows this summary. ### **Addressing Systemic Design Flaws in Data Processing** To move beyond one-off fixes for recurring validation and mapping errors, a fundamental redesign of the Data Services System (DSS) logic is in progress. The current system relies too heavily on generic AI/LLM rules, leading to frequent failures. The solution involves integrating customer-specific "build templates" directly into DSS. These templates will provide the precise rules (covering UAMs, locations, etc.) for the AI to follow per bill, which should address the majority of validation issues for web-downloaded invoices. Testing for this new logic is planned for the coming weeks. However, invoices from other sources (like BDE email uploads) that lack vendor/customer data will still fall into a manual review queue until a separate solution is developed. ### **Developing Unified Reporting and Dashboards** To improve operational visibility and eliminate manual reporting work, there is a push to develop a dynamic Power BI dashboard. This dashboard would automatically track key metrics like the number of unprocessed invoices per customer, using the same logic currently applied manually. A critical requirement is documenting the exact data sources and calculation logic ("a cheat sheet") to ensure everyone interprets the numbers consistently. A separate Power BI report for labs exists but is currently hampered by performance issues due to its size, requiring optimization before new features can be added. ### **Customer-Specific Account Cleanup Projects** Two specific customer account cleanup projects were reviewed: - **ERA Cleanup:** Work is ongoing and on track for a deadline of February 8th, with a subsequent cleanup for St. Elizabeth scheduled for February 23rd.
- **Aurora University:** A significant, long-standing issue was revisited. Historical bills for this client were not processed correctly through DSS, resulting in inaccurate data in UBM. A manual cleanup of approximately 13 months of historical data is required, which involves both adding accounts in Data Services and correcting records in UBM. The client has requested an update by February 16th, and the operational teams will coordinate on the approach for both fixing the backlog and establishing a correct process for future invoices.
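The reconciliation exercise described above, comparing complete account lists from both systems, reduces to a set difference once account numbers are canonicalized so that formatting differences alone (dashes, spaces, leading zeros) do not register as mismatches. A minimal sketch:

```python
def norm_account(raw: str) -> str:
    """Canonicalize an account number: keep only alphanumerics, uppercase,
    and drop leading zeros so formatting variants compare equal."""
    cleaned = "".join(ch for ch in raw if ch.isalnum()).upper()
    return cleaned.lstrip("0") or "0"

def reconcile(ds_accounts: set[str], ubm_accounts: set[str]) -> dict[str, set[str]]:
    """Compare the two systems' full account lists after normalization."""
    ds_norm = {norm_account(a) for a in ds_accounts}
    ubm_norm = {norm_account(a) for a in ubm_accounts}
    return {
        "only_in_ds": ds_norm - ubm_norm,   # candidates for missing invoices
        "only_in_ubm": ubm_norm - ds_norm,  # stale or orphaned UBM records
    }
```

Whatever survives normalization in the "only in" buckets is the genuine investigation queue, which keeps the team's review time focused on real discrepancies rather than formatting noise.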
Report
## **Project Status and Field Mapping Review** The meeting served as a project status update and working session focused on the integration of reporting systems. The primary objective was to clarify the mapping and purpose of specific data fields within critical financial reports to ensure a seamless transition and configuration for the new system. ### **Progress on Report Documentation and Field Identification** A list of reports and their associated data fields has been compiled into a shared spreadsheet for collaborative review. Significant effort is being invested to document each report's purpose, filters, and data structure, drawing from existing onboarding documents and system exports. - **Compilation of Source Materials:** The documentation integrates reports from multiple sources, including a shared repository from "Engie" and initial onboarding documents provided by Constellation. The goal is to create a single source of truth for all required reports. - **Detailed Analysis Approach:** For each report, the process involves opening the actual report to capture its filter selections and a data sample. This granular approach is intended to provide precise specifications for the development team, though it is recognized as a time-intensive task. - **Centralized Collaboration Tool:** All findings and questions are being centralized in a shared spreadsheet on the team's SharePoint site. A key housekeeping rule was established to **strike through** incorrect or inapplicable information rather than deleting it, preserving the audit trail of changes until consensus is reached. ### **Clarifying Critical Field Mappings for Financial Processing** A major focus was deciphering the exact meaning and destination of fields within a specific "consolidated" financial report. This report is used daily for operational tasks and is the file directly uploaded into the JD Edwards ledger system. - **Urgent Need for GL Mapping:** A critical priority was identified: providing a complete breakout of how each field in the consolidated report maps to the General Ledger (GL). Having this mapping is essential for the Constellation team to begin their development work without delays. - **Leveraging Operational Expertise:** It was confirmed that the financial report in question is used extensively for paying utility bills. The team member who uses this report daily possesses the operational knowledge to define each column's purpose, which is vital for accurate system replication. - **File Format Specifications:** Clarification was provided on the technical format of the delivered file. The operational file is a flat text file without column headers. The column names discussed in the meeting are logical labels applied by the business team for processing, which must now be formally mapped to the corresponding technical fields. ### **Adjusting Timelines Due to Operational Workload** The project timeline required immediate adjustment to accommodate the significant workload of the business team, who are concurrently managing period-end financial close activities. - **Deferring Detailed Review:** The business resources needed to provide field definitions are heavily occupied with closing responsibilities. Consequently, a scheduled working session was postponed to allow them necessary focus time. - **New Checkpoint Established:** A new checkpoint meeting was scheduled for the following week to assess progress on the field definitions. 
The expectation is that the preliminary work, especially the crucial GL mapping, will be completed by the end of that week. - **Recurring Coordination Meetings:** The teams agreed to reinstate a recurring weekly meeting series to provide overall project updates and track timeline adjustments, ensuring better ongoing coordination. ### **System Interface and Data Delivery Requirements** Alignment was reached on the technical requirements for the new system's output to ensure compatibility with existing downstream processes. - **Mandating Interface Consistency:** A firm requirement was established: the new system must generate an output file that is **identical in format and structure** to the file currently received. This is to ensure that existing JD Edwards programs that process and upload the data will not require any modification. - **Confirming Data Destination:** It was explicitly confirmed that the specific "consolidated report" being analyzed is the exact file that feeds into the general ledger. Therefore, all service start dates and other critical fields present on the current report must be included on the new interface file. ### **Administrative and Compliance Follow-ups** Brief updates were provided on ancillary administrative items necessary for the project's broader context. - **FBO Account Approval:** It was confirmed that the paperwork for the FBO (For Benefit Of) account has been approved by the compliance team, resolving that administrative item. - **Outstanding Fee Question:** A follow-up question regarding the ability to send a specific type of fee (BAI) to the new account was raised. The responsible party committed to checking with internal finance and reporting back to the core project team.
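Because the delivered consolidated file is a headerless flat text file, the formal mapping exercise amounts to agreeing on an ordered list of logical labels and applying them on read. The sketch below illustrates that idea; the column names, their order, and the pipe delimiter are all assumptions for illustration, not the actual file specification:

```python
import csv

# Logical column labels applied by the business team; the delivered file
# itself carries no header row. Names and order here are illustrative only.
CONSOLIDATED_COLUMNS = [
    "account_number", "vendor_id", "service_start_date",
    "service_end_date", "gl_code", "amount",
]

def read_consolidated_report(path: str) -> list[dict]:
    """Read the headerless flat file and attach the agreed logical labels,
    so each field can then be mapped to its GL destination."""
    with open(path, newline="") as f:
        reader = csv.reader(f, delimiter="|")  # delimiter is an assumption
        return [dict(zip(CONSOLIDATED_COLUMNS, row)) for row in reader]
```

Freezing a list like `CONSOLIDATED_COLUMNS` is effectively the GL-mapping deliverable discussed above, and the same ordered list would define the output writer so the new interface file stays byte-compatible with what JD Edwards expects.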
Report
## Summary The meeting focused on finalizing the reporting and accounting of work completed in December, with particular attention on two key deliverables: the system status report and the tracking of unplanned work. ### Completion Status of the December System Status Report This section clarifies the delivery status of a major December deliverable. While the report was technically finished, its official completion is being reconsidered due to ongoing feedback. - **Report delivered but pending revisions:** The primary "system status report" was submitted by the end of December. However, significant feedback was received both in late December and in January, which has not yet been incorporated. This outstanding work means the task is not considered fully closed. - **Interconnection with another report:** The system status report is closely linked with another deliverable, the "lab report," and feedback pertains to both. This interconnected complexity contributes to the decision to reclassify the completion date. - **Reclassification to January:** To accurately reflect the state of work, it was agreed to move the official completion milestone for this report from December to January. This avoids presenting the task as finished while acknowledging the substantial effort already invested. ### Allocation and Reporting of Unplanned Work The discussion centered on quantifying and documenting ad-hoc tasks and additional reports completed outside the original plan for December. - **Time allocation for unplanned tasks:** A preliminary agreement was reached to allocate 25% of the total December work time to "unplanned work." This percentage serves as a high-level metric for effort spent on unforeseen or reactive tasks. - **Documentation of additional reports:** Alongside the unplanned work, several additional reports were generated. An agreement was made to allocate 15% of the time specifically to these reports. A detailed list of these specific reports was to be provided separately to ensure precise tracking. - **Finalizing the December accounting:** The confirmation of these percentages (25% for unplanned work, 15% for additional reports) allowed for the finalization of the December work summary. The process concluded with an understanding that the specific list of reports would be submitted to complete the record.
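The agreed split is simple arithmetic: 25% unplanned work plus 15% additional reports leaves 60% for planned December work. As a worked example, assuming a 160-hour working month (the actual hour base was not stated in the meeting):

```python
DECEMBER_HOURS = 160  # assumed hour base, for illustration only

unplanned = 0.25 * DECEMBER_HOURS               # 40.0 hours of unplanned work
reports = 0.15 * DECEMBER_HOURS                 # 24.0 hours on additional reports
planned = DECEMBER_HOURS - unplanned - reports  # 96.0 hours remain for planned work
```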
UBM Project Group
## Summary The meeting focused on two primary objectives: finalizing a summary of the team's delivered value for executive review and conducting a detailed review and adjustment of the project plan for the first quarter of 2026. The discussion was centered on ensuring alignment on priorities and realistic timelines for upcoming technical and operational work. ### Project Timeline and Executive Reporting A key discussion point was the plan to formally extend the current engagement through the first quarter, pending executive approval. This extension is intended to solidify foundational improvements and transition to a more sustainable, long-term support model. - **Extension for Foundation Building:** The additional time in Q1 is seen as crucial for overcoming the initial implementation challenges and completing high-priority foundational work, which will reduce the need for intensive prioritization in subsequent phases. - **Preparation of a Value Summary:** There is a direct request for each team to provide a concise paragraph summarizing the business value delivered by their work. This summary, intended for executive leadership, should articulate how specific problems were solved and the resulting impact, serving as a record for performance reviews and organizational recognition. ### Review and Adjustment of the Q1 2026 Plan The majority of the meeting was dedicated to a month-by-month review of the planned work for January through March 2026, with adjustments made based on current progress and revised scope understanding. - **Data Validation and Rule Review:** A significant rules review effort is confirmed as a multi-month project, now expected to span January through March, with the bulk of the critical work targeted for completion in the first two months. - **Power BI Dashboard Development:** The priority for January is to establish reliable data pipelines from source systems into Power BI. The development of specific report views is dependent on this foundational data accuracy being achieved first. - **Credential Management and Account Health View:** A project related to UBM (Utility Bill Management) credential management and a holistic account health dashboard is a high-priority item, now slated for a January-February effort with hope for early completion. - **JIRA Integration and Customer Issues:** Work on integrating systems with JIRA for tracking customer issues remains scheduled for February, pending some final tweaks and deeper technical discussions. - **MFA (Multi-Factor Authentication) Initiatives:** Plans regarding MFA implementation from personal phones are tentatively kept in February but acknowledged as requiring further evaluation. - **Integrity Check and Template Layer:** The approach for building system integrity checks has evolved. The focus is now on creating a standardized account template, which will reduce the immediate priority and scope of a separate "Interpret layer," pushing that item later into the quarter. - **Phase Two of Emergency Payments:** A subsequent phase involving bank pulls for emergency payments is noted for March, contingent on continued coordination with external partners. ### Overall Progress and Forward Look The session concluded on a positive note regarding the team's overall trajectory and collaboration. - **Acknowledgement of Team Effort:** The significant progress and consistent engagement from the team in weekly prioritization sessions was explicitly recognized as having placed the project in a strong position.
- **Anticipated Future Reporting:** Teams were advised to maintain their own records of delivered value, as a similar summary exercise will likely be required at the end of the first quarter to demonstrate the value of the extended engagement period to stakeholders.
Plan for Arcadia
## Summary ### Task Management and Jira Setup The meeting began with a discussion on establishing a structured task management system for upcoming projects, particularly focusing on the use of Jira's timeline and board views. The primary goal was to create a centralized, running list of all required tasks to ensure clarity and alignment, especially for early-phase spike research tickets. - **Utilizing Jira for project tracking:** It was emphasized that using Jira's timeline view, either within the main board or opened in a new page, would be the method for logging all tasks related to the Arcadia effort and other workstreams. This approach aims to provide full visibility into the work that needs to be done. - **Capturing early research work:** A specific need was identified to document spike and research tickets from the outset of the Arcadia project, ensuring these foundational investigative tasks are properly tracked and accounted for in the overall project plan. ### Arcadia Integration and Updates A significant portion of the discussion revolved around the ongoing integration with Arcadia, covering both code deployment and potential dependencies on the DSS (Data Services System). - **Code deployment strategy for edits:** There was a deliberation on when to push completed edit functionality to production. The consensus was to proceed with the deployment since the feature is not yet in active use; however, the ideal process would involve gathering operator feedback and implementing activity history tracking before a full rollout. - **Identifying dependencies on DSS:** A key point was raised regarding the Arcadia integration's reliance on the DSS. If any part of the Arcadia work requires updates from the DSS side, that responsibility will fall to the current team, highlighting a potential area of inter-system dependency that needs monitoring. ### Credential Management for Vendors A detailed update was provided on the progress of managing vendor credentials, which are critical for system integrations, including the upcoming work with Arcadia. - **Current state analysis in FDG and UBM:** - **FDG Project:** Credentials are currently stored in an encrypted form within a Windows credential table in the DDS database. UI development in the DSS app to display these credentials is complete, and work is now focused on building the APIs to fetch this data. - **UBM Application:** An investigation into the UBM (Utility Bill Management) system could not locate a dedicated credentials management page or database table. The team is uncertain where UBM credentials are stored, noting a previously shared Excel sheet with thousands of vendor credential entries that may or may not be the source. - **Unifying credential storage:** The long-term objective is to establish a single source of truth for credentials that serves both the DSS and UBM systems, moving away from fragmented and manual storage methods. ### Syncing Challenges and a New Development Approach The conversation highlighted significant challenges with the current, semi-manual process of syncing credentials between different systems and outlined a new development path forward. - **Problems with the current manual sync:** The existing sync process, managed by an individual team member, is prone to periods of being out-of-sync. Major issues include an inability to definitively identify the source of an update (DSS, UBM, or Smartsheet) and, consequently, no clear source of truth.
Updates made directly in Smartsheet are especially problematic as they exist completely outside of any integrated system. - **Immediate development priorities:** Given the uncertainty around the sync mechanism, the immediate plan is to develop core credential management functionalities (**add, edit, delete**) within the new system, as sketched after this summary. This will include fields for created/updated dates and a notes section. A formal Jira ticket will be created to capture these requirements. - **Need for architectural clarity:** A fundamental question remains unresolved: whether UBM will point to the new credential database or if another syncing architecture will be required. Collaboration with the team member who currently handles the sync is needed to understand the existing workflow before a new solution can be designed. ### Future Considerations for Arcadia The critical importance of robust credential management was underscored in the context of future work with the Arcadia platform. - **Credential passing for builds:** It was clarified that vendor credentials will need to be passed to Arcadia with every request to retrieve new builds. This dependency makes solving the credential storage and syncing problem a high-priority prerequisite for successful Arcadia integration, elevating its importance on the project roadmap.
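A minimal sketch of what the core credential store with audit fields could look like, using SQLite for brevity; the table and column names are illustrative, not the actual DSS design, and secrets would of course be encrypted at rest:

```python
import sqlite3

# Illustrative single-source-of-truth credential table with audit fields.
SCHEMA = """
CREATE TABLE IF NOT EXISTS credentials (
    id         INTEGER PRIMARY KEY,
    customer   TEXT NOT NULL,
    vendor     TEXT NOT NULL,
    account    TEXT NOT NULL,
    username   TEXT NOT NULL,
    secret     TEXT NOT NULL,
    notes      TEXT,
    created_at TEXT NOT NULL DEFAULT (datetime('now')),
    updated_at TEXT NOT NULL DEFAULT (datetime('now')),
    UNIQUE (customer, vendor, account, username)
)
"""

def init_store(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(SCHEMA)
    return conn

def add_credential(conn: sqlite3.Connection, customer: str, vendor: str,
                   account: str, username: str, secret: str,
                   notes: str = "") -> None:
    """The 'add' of the add/edit/delete trio; edit and delete would touch
    updated_at the same way to preserve the audit trail."""
    conn.execute(
        "INSERT INTO credentials (customer, vendor, account, username, secret, notes) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (customer, vendor, account, username, secret, notes),
    )
    conn.commit()
```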
DSS Issues
## Summary The meeting focused on critical operational issues related to duplicate bill processing within the system, identifying several interconnected problems and their root causes. The discussion centered on inefficiencies in the duplicate resolution workflow, data integrity issues with bills from automated sources, and systemic bugs that obscure the true status of documents. ### Duplicate Resolution Workflow Challenges A significant challenge is the inability to efficiently verify potential duplicate bills, leading to extensive manual work. When bills are flagged as duplicates, the corresponding PDF often fails to display on the duplicate resolution screen for comparison. This occurs because the system does not store the invoice's header information, such as date, due date, and total charges, on the bill record when it is initially marked as a duplicate. Consequently, team members must manually investigate each flagged bill behind the scenes, a process that has become unsustainable given the volume of duplicates. This issue is compounded by the recent practice of processing change-of-address requests for all accounts, including those enrolled in web downloads, which has significantly increased duplicate submissions. ### Root Causes of Duplicate Influx The surge in duplicate bills stems from multiple entry points and systemic gaps. One primary source is the **Big Data Energy (BDE)** integration: their automated bot scrapes customer portals per account based on an expected invoice date but does not reference the last bill pulled. This can result in the bot fetching the previous month's bill repeatedly if a new one hasn't posted, causing both duplicates and late bill submissions. Furthermore, bills submitted via BDE or other web downloads often enter the system with "unknown" client and vendor fields, routing them through DSS and contributing to the backlog before they are properly identified. ### Critical System Bugs and Data Integrity Two major technical bugs were identified as exacerbating the problem. First, in the **DSS system**, when a bill is marked as a duplicate based on a file checksum, the system fails to store that checksum on the bill record. This prevents the duplicate PDF from appearing in the resolution queue, leaving reviewers blind. Second, a severe **data integrity issue** was uncovered: bills can be uploaded under incorrect account numbers but still be flagged as duplicates. Since the duplicate resolution screen does not display the account number to which a bill is attached, reviewers cannot easily identify this error. In some alarming cases, bills have appeared in the workflow completely detached from any client, vendor, or account, a scenario that should cause a system crash but currently does not. ### Proposed Solutions and System Improvements Several targeted fixes and enhancements were agreed upon to address these core issues. The immediate technical fix is to correct the bug in DSS so that the checksum **is properly stored** when a duplicate is detected, ensuring the PDF appears for comparison. A second crucial enhancement is to **modify the duplicate resolution user interface** to display the account number (and client/vendor information) attached to each bill. This will allow reviewers to instantly spot mismatches, such as a duplicate bill uploaded to the wrong account. Additionally, there was a proposal to add **system-generated activity logs** for DSS processes within the bill record to improve auditability and tracking.
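The checksum fix agreed above boils down to persisting the file hash on every bill record, duplicate or not, so the resolution screen can always locate the matching PDF. A minimal sketch of the intended behavior; the record shape and in-memory index are illustrative stand-ins for the DSS database:

```python
import hashlib

def ingest_bill(pdf_bytes: bytes, bills_by_checksum: dict[str, dict]) -> dict:
    """Always store the file checksum on the bill record, even when the bill
    is flagged as a duplicate; the resolution screen can then fetch the
    original by checksum for side-by-side comparison."""
    checksum = hashlib.sha256(pdf_bytes).hexdigest()
    record = {"checksum": checksum, "status": "new"}
    existing = bills_by_checksum.get(checksum)
    if existing is not None:
        record["status"] = "duplicate"
        record["duplicate_of"] = existing["id"]  # link shown on the review screen
    else:
        bills_by_checksum[checksum] = {**record, "id": len(bills_by_checksum) + 1}
    return record
```

The UI enhancement discussed above is then just a matter of rendering the account, client, and vendor attached to both `record` and its `duplicate_of` target, so a bill uploaded to the wrong account is visible at a glance.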
### Process and Policy Considerations Beyond technical fixes, broader process issues were discussed. The practice of allowing both mailed paper bills and web downloads for the same accounts following a change of address is a major contributor to the duplicate volume. While disallowing dual sources was suggested as a way to minimize duplicates, it was acknowledged that customer demand for fast, digital processing makes this a complex policy decision. The team emphasized that until systemic fixes are implemented, the duplicate resolution queue will remain a significant bottleneck requiring constant manual oversight to prevent processing errors and ensure data accuracy.
DSS Daily Status
## Summary The meeting centered on planning the strategic priorities and technical foundations for Q1 of 2026 and beyond, with a strong focus on resolving systemic pain points and establishing robust, scalable processes. The discussion outlined a series of interconnected initiatives designed to overhaul outdated systems, improve data integrity, and create shared services for the broader Navigator platform. ### Onboarding Process Overhaul **A complete redesign of the customer onboarding workflow is necessary to eliminate reliance on informal, error-prone methods.** The current process, which often depends on emails, spreadsheets, and unrecorded calls, leads to scope creep and inconsistencies. - **Structured Onboarding UI:** The post-contract onboarding experience must be moved into a formalized UI. This system should capture and require verification of all critical contract details like meter counts, account numbers, vendor lists, and pricing plans directly from the customer to serve as a single source of truth. - **Centralized Setup Documentation:** A documented and trackable process for bill setup needs to be created. This involves generating a unique document ID from the onboarding process for each account/vendor combination, where all associated bills can be linked and tracked, eliminating disputes over what was or wasn't received. - **Integration with Build Template Creation:** The onboarding system should seamlessly connect with the creation of build templates. The ideal flow would allow for the automated generation or easy manual creation of a build template based on the verified setup information provided during onboarding. ### Bill Data Processing and System Integration **Improving the flow and accuracy of bill data processing is critical, with particular attention paid to the role of build templates and the integration of new data services.** - **IPS and Build Template Integration:** A major gap was identified where the LLM-based Intelligent Processing Service (IPS) does not currently use the existing customer-specific build templates to enrich bill data. This leads to generic, often incorrect inferences and has resulted in tens of thousands of inaccurate line items that will require a cleanup process. - **Enabling Shared Navigator Services:** The IPS and DSS (Data Services System) capabilities need to be packaged as a shared "drag-and-drop" service. This will allow other Navigator applications like Carbon Accounting and Glimpse to send PDFs and receive enriched JSON, enabling them to build their own services without waiting for a full UBM (Utility Bill Management) integration. - **Third-Party Service Management & Logging:** As reliance on third-party data providers (like Arcadia) increases, a new, dedicated logging and mapping layer is essential. This system must track every interaction (monitoring SLAs, logging API call failures, and mapping provider outputs to internal formats) to quickly identify where breakdowns occur in the data supply chain; a minimal sketch of such a wrapper appears below. ### Credentials and Data Source Management **Managing customer credentials for third-party data access requires a systematic, auditable approach rather than the current fragmented method.** - A centralized, filterable system is needed to view all credentials (e.g., for utility portal access) for a given client, vendor, and account. This allows for ongoing verification by clients and ensures any credential changes are updated in one place and propagated correctly, preventing data flow interruptions for high-value contracts.
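The logging and mapping layer flagged above is, at its simplest, a traceability wrapper around every provider call. A minimal sketch with hypothetical parameters; the real layer would persist these events for SLA reporting rather than only emitting log lines:

```python
import logging
import time
from typing import Callable

log = logging.getLogger("provider_calls")

def call_provider(provider: str, operation: str, fn: Callable[[], dict], *,
                  client: str, vendor: str, account: str) -> dict:
    """Wrap a third-party call so every request is traceable back to the
    client-vendor-account it was made for, with latency and failures logged."""
    start = time.monotonic()
    try:
        result = fn()
    except Exception:
        log.exception("%s.%s failed client=%s vendor=%s account=%s",
                      provider, operation, client, vendor, account)
        raise
    log.info("%s.%s ok client=%s vendor=%s account=%s elapsed=%.2fs",
             provider, operation, client, vendor, account,
             time.monotonic() - start)
    return result
```

Because every call carries the client-vendor-account triple, a breakdown in the data supply chain can be localized to a specific provider, operation, and account rather than discovered downstream as a missing bill.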
### Strategic System Transition and Legacy Sunsetting **The long-term strategy involves methodically migrating functionality from the legacy DDS system to the modern DSS platform while clearly defining what remains.** - **DDS Functionality Assessment:** The immediate goal is to catalog what critical functions still reside solely in DDS. The primary items identified were bill setup workflows, summary bill processing, and the underlying customer-vendor-account data (build templates). The intent is to replicate or improve these in DSS. - **Defining the Sunset Path:** The transition is not about a hard cutover but about making DSS so comprehensive and reliable that reliance on DDS naturally diminishes. The focus is on building the new capabilities in DSS first, ensuring they work for operators, and then deprecating DDS components as they become redundant. ### API Development and Future-State Vision **The end-state vision involves exposing clean, normalized data through APIs to both internal applications and external channel partners.** - **Navigator Shared APIs:** The plan is to develop a set of callable APIs that provide access to normalized bill data (in both DSS and application-specific formats like UBM JSON). This would allow technical customers or partners to directly query for their data, serving use cases like integration with platforms such as Azure. - **Empowering Internal Teams:** Application teams like Carbon Accounting will be empowered with the enriched DSS JSON and documentation so they can build their own normalization layers and APIs independently, accelerating development across the Navigator suite. ### Foundational Issues and Success Metrics **The conversation concluded by identifying the root causes of ongoing operational issues and the criteria for success in 2026.** - **Core Pain Points:** The recurring "fires" are traced back to four interconnected gaps: not using build templates in IPS, shortcomings in data acquisition from third parties, inadequate logging/reporting, and drift in master data sources. Solving these is seen as the key to freeing up bandwidth from constant firefighting. - **Expectations for Q1 2026:** There is a strong expectation that by the end of Q1 2026, significant progress must be visible on these core pain points. The goal is to resolve the majority of the daily operational issues reported by Data Services and UBM teams, moving from identifying problems to executing the built solutions.
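The Navigator shared APIs envisioned above could begin as a thin read endpoint over the normalized bill store. A sketch using FastAPI; the route, query parameter, and response shape are illustrative assumptions, not a defined interface:

```python
from typing import Optional

from fastapi import FastAPI, HTTPException

app = FastAPI()

# Stand-in for the normalized bill store; in practice this would query DSS.
NORMALIZED_BILLS: dict[str, list[dict]] = {
    "acme": [{"statement_date": "2026-01-05", "total": 1234.56}],
}

@app.get("/v1/customers/{customer_id}/bills")
def list_bills(customer_id: str, since: Optional[str] = None) -> list[dict]:
    """Return normalized bill JSON for one customer; `since` (ISO 8601 date)
    supports incremental pulls by channel partners."""
    bills = NORMALIZED_BILLS.get(customer_id)
    if bills is None:
        raise HTTPException(status_code=404, detail="unknown customer")
    if since is not None:
        bills = [b for b in bills if b["statement_date"] >= since]
    return bills
```

An interface this narrow would already cover the channel-partner use case described above, while application teams like Carbon Accounting layer their own normalization on the same enriched JSON.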
DSS Matching Logic
## **Analysis and Improvement of DSS Invoice Matching Logic** The meeting centered on a deep technical dive into the Data Services System (DSS) invoice matching logic, with the primary objective of diagnosing persistent issues causing incorrect line item and meter creation. The discussion deconstructed the current multi-phase workflow to identify root causes in vendor matching, data normalization, and the overall integration with the AI (LLM) extraction process, concluding that a fundamental paradigm shift is required to improve accuracy. ### **Overall Problem Statement and Initial Focus** The core issue is that the system frequently fails to correctly match incoming invoice data with existing database templates, leading to the erroneous creation of new meters and line items instead of mapping to known ones. This results in data duplication and workflow inefficiencies. The immediate diagnostic focus was placed on three key areas: improving the core matching logic with potential "fuzzy" matching, cleaning up legacy data, and defining a clear process for handling invoices without an exact template match. ### **Deconstructing the Existing Matching Workflow (Phase 1)** The team meticulously walked through the documented matching flowchart to understand each failure point. - **Step 1: Vendor ID and Account Number Matching:** This initial gate relies on a vendor ID and account number. The analysis revealed critical vulnerabilities: these identifiers can be captured at upload or inferred post-IPS extraction, leading to potential mismatches. A major concern is the translation of a vendor *name* (from IPS) to a vendor *ID* (a number used for database lookup), which may fail if not an exact match. Furthermore, account numbers could be captured differently (e.g., with dashes or leading zeros), requiring normalization to ensure reliable string matching. If this step fails, the process cannot proceed to use an existing template. - **Step 2 & 3: Service Account and Meter Fetching:** Upon successful vendor/account matching, the system fetches all related service accounts (e.g., "Full Service," "Distribution Only") and their associated meters. The logic assumes that for existing customers, these entries should always be present. The discussion confirmed that the system pulls *all* service accounts and meters for the matched account, without initial filtering, which is a potential point for future optimization. - **Step 4: Line Item Fetching and Failure Paths:** Line items are fetched specific to each meter. The meeting clarified the hierarchical dependency: if vendor/account matching fails, meter matching inherently fails, leading to the creation of new structures from IPS data. This results in a bill being created without a linked customer account (`client_id` set to `null`), sending it to a manual review queue. ### **Identifying Specific Matching Failures** The conversation pinpointed exact scenarios where the current logic breaks down, using a real invoice example. - **Meter-Level Matching Criteria:** Meters are matched using a combination of **commodity description**, **rate code description**, and **service address**. The example showed that while the commodity matched, the rate code failed because the IPS extraction captured "Commercial Service" while the database record contained the full string "Rate 4: Commercial Service." This exact string mismatch caused a new meter to be created. 
- **Issues with Extracted Data:** The **rate code** and **service address** are particularly prone to extraction variance by the LLM. The address, especially, requires "fuzzy" logic for matching, as minor differences in formatting can cause failures. The **commodity description**, while more stable, can also be misidentified. ### **Core Flaw: Disconnection Between Template and AI** A critical insight emerged regarding the fundamental design of the comparison phase (Phase 2). - **Phase 2 (Comparison) Limitations:** The system compares the IPS-extracted "enhanced JSON" with the database template. However, it was determined that the process only *overwrites* the IPS data with database values **when an exact match is already found**. The database template does not actively *guide* or *constrain* the initial LLM extraction. - **Circular Logic Problem:** Essentially, the system uses the database to clean up data only after the AI has already made its best guess. If the AI's extraction is poor (e.g., on rate code), no match occurs, and the incorrect AI data is kept, leading to new entries. The existing template is not being utilized as a prior input to make the AI's extraction more accurate in the first place. ### **Proposed Solutions and Strategic Rethinking** The conclusion was that incremental tweaks are insufficient; a new approach is needed to tightly couple the known template data with the AI processing stage. - **Feed Templates to the LLM:** The proposed solution is to dynamically feed the known invoice template for a matched vendor/account **into the LLM system prompt** *before* extraction. This would instruct the AI to use the existing line item descriptions, commodity types, and rate codes as a reference, forcing it to align its output with the known structure. - **Implement Confidence Scoring:** The enhanced system should include confidence scores from the LLM. For example, extractions with less than 90% confidence could be flagged for review, and those below a lower threshold could require mandatory manual intervention, preventing incorrect automated creation. - **Ownership and Research:** The team resolved to take full ownership of this logic. Immediate next steps involve a thorough review of the existing LLM system and user prompts, the template matching code, and the exact method by which vendor names are translated to IDs. ### **Action Plan and Administrative Notes** The session concluded with a clear action plan and ancillary discussions about tooling access. - **Immediate Technical Investigations:** The team will review three key areas: the LLM system/user prompts governing extraction, the logic for translating vendor names to IDs, and the overall design of the template matching and comparison workflow. A meeting with the original developer was also scheduled to gain historical context. - **Development and Testing Path:** Changes to the LLM prompts can be made by cloning the repository, creating a feature branch, and deploying to a development environment for testing. This allows for controlled experimentation with new prompting strategies. - **Access and Infrastructure Issues:** A significant blocking issue is the inability to access internal tools (like FTG Connect) due to VPN/client conflicts on non-company hardware. This hinders the ability to test changes and investigate issues directly, and resolving this access problem was noted as a high priority for enabling effective development.
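Several of the failure modes identified above (dashes and leading zeros in account numbers, the "Rate 4: Commercial Service" rate-code mismatch, address formatting variance) come down to comparing strings too literally. A minimal sketch of the kind of normalization and fuzzy comparison discussed, using only the standard library; the threshold and containment rule are illustrative, not the agreed design:

```python
import difflib
import re

def norm_account_number(raw: str) -> str:
    """Strip dashes/whitespace and leading zeros before string comparison."""
    return re.sub(r"[\s\-]", "", raw).lstrip("0")

def addresses_match(extracted: str, on_file: str, threshold: float = 0.9) -> bool:
    """Fuzzy comparison for service addresses, tolerant of minor formatting
    variance in the LLM extraction."""
    a = re.sub(r"\s+", " ", extracted).strip().lower()
    b = re.sub(r"\s+", " ", on_file).strip().lower()
    return difflib.SequenceMatcher(None, a, b).ratio() >= threshold

def rate_codes_match(extracted: str, on_file: str) -> bool:
    """Treat 'Commercial Service' and 'Rate 4: Commercial Service' as a match
    by testing substring containment after normalization."""
    a, b = extracted.strip().lower(), on_file.strip().lower()
    return a == b or a in b or b in a
```

Looser comparisons like these would have matched the real invoice example above to its existing meter instead of creating a new one, though the longer-term fix discussed is still to feed the template into the LLM so the extraction aligns in the first place.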
Bill Templates Review
## Summary The meeting focused on a critical issue in the automated bill processing system where the Large Language Model (LLM) is creating duplicate line items instead of correctly matching them to existing templates. The core problem stems from a mismatch in the *commodity item description* and minor formatting differences, causing the system's exact-match logic to fail. The discussion centered on diagnosing this issue, understanding how the LLM makes classification decisions, and exploring strategic solutions to improve matching accuracy. ### Issue with Exact-Matching Logic The current system's exact-match requirement for three fields (caption, unit description, and commodity item description) is too rigid, leading to unnecessary duplication. Minor discrepancies, such as whitespace or slight wording variations, cause the system to create new line items instead of matching existing ones. - **Failed Matches Due to Minor Variations:** The primary example discussed was an "Ascension" account where the LLM classified a line item as "use build UCPR," but the manually entered template had it as "adjustment UCPR." This difference caused a complete match failure, leading to the creation of a new, redundant line item. - **Sensitivity to Formatting:** Another example highlighted how a simple whitespace difference in a caption (e.g., "Natural Gas Cost - DTHMS" vs. "Natural Gas Cost- DTHMS") was enough to prevent a match, even though the commodity item description ("commodity adjustment") was identical. - **Impact on Data Integrity:** This behavior risks creating a cluttered and inaccurate dataset with multiple line items for the same charge, which could skew consumption and cost analysis. The focus was on ensuring the *commodity item description* is correct above all else, as this directly impacts financial and usage reporting. ### Analysis of LLM Classification & Prompt Structure To address the root cause, the conversation delved into how the LLM is instructed to classify line items from invoice OCR data. The system uses a combination of a *system prompt* and a *user prompt* to guide its decisions. - **Dual-Prompt Architecture:** The *system prompt* defines the AI's persona and core identity (e.g., an "energy specialist"), while the *user prompt* provides specific, task-oriented instructions and the actual OCR data to process. - **Current Instruction Set:** The prompts contain general rules and some vendor-specific instructions (e.g., for Constellation). However, the analysis revealed that the instructions might not be sufficiently detailed or contextual to handle the nuances of every vendor's billing terminology, leading to misclassifications like "use build" vs. "adjustment." - **Inference Process:** The LLM chooses a commodity item description from a provided list of mappings. The challenge is that without precise, context-aware guidance, it can select a semantically close but technically incorrect term, which then breaks the downstream matching logic. ### Proposals for Enhancing Matching Accuracy Several solutions were proposed to make the system more robust and accurate, moving beyond brittle exact-matching. - **Implementing Fuzzy Logic:** A primary suggestion was to introduce fuzzy matching, at least for fields like captions. Pre-processing text by removing extraneous whitespace or special characters before comparison would prevent matches from failing on trivial formatting differences (a sketch of this pre-processing appears after this summary).
- **Revising and Expanding LLM Prompts:** The need to meticulously review and enhance the prompt instructions was emphasized. This could involve: - **Adding Vendor or Commodity-Specific Rules:** Creating a database of granular instructions keyed by vendor ID or commodity type. This would allow the system to apply highly relevant rules (e.g., "for Vendor X, term 'UCPR' should always be classified as 'adjustment'") during processing. - **Structuring Prompts for Performance:** Concern was raised that a single, ever-growing prompt file might become less performant. The idea of breaking instructions into modular, conditionally applied blocks based on the bill's context (vendor, commodity) was discussed as a potential optimization. - **Leveraging Existing Account Templates:** A more fundamental solution was proposed: instead of (or in addition to) complex prompt engineering, the system could fetch the known correct template for an account and instruct the LLM to use it as a primary reference. This would dramatically reduce guesswork and ensure consistency with historically approved line items. ### Strategic Direction and Next Steps The discussion concluded with a strategic pivot towards a more template-driven approach and a plan for focused testing. - **Prioritizing Template-Based Matching:** The consensus shifted towards viewing the pre-existing account template as the single source of truth. The immediate action is to explore modifying the system to pass the known template to the LLM, guiding its classification to align with established data rather than relying solely on its interpretation. - **Targeted Investigation and Cleanup:** Before a full-scale change, a deep dive into specific failure cases (like the Ascension bill) was planned. The goal is to understand exactly why the LLM chose an incorrect description and whether the correct option was even available in its mapping list. - **Phased Implementation and Review:** The approach will be iterative. Changes would ideally be tested on a select customer account first to observe the impact in a controlled environment before broader deployment. This allows for careful review of the LLM's output against expected results.
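As referenced above, a minimal sketch of the proposed pre-processing and fuzzy comparison, using only the Python standard library; the 0.9 similarity cutoff is illustrative, not an agreed value.

```python
# Normalize captions before comparing so trivial formatting differences
# (extra whitespace, stray punctuation) cannot break a match, then fall
# back to a fuzzy similarity ratio for near-misses.

import re
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace so
    'Natural Gas Cost - DTHMS' and 'Natural Gas Cost- DTHMS' compare equal."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9 ]+", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def captions_match(a: str, b: str, cutoff: float = 0.9) -> bool:
    na, nb = normalize(a), normalize(b)
    if na == nb:                       # exact match after cleanup
        return True
    return SequenceMatcher(None, na, nb).ratio() >= cutoff

# The whitespace example from the meeting now matches:
assert captions_match("Natural Gas Cost - DTHMS", "Natural Gas Cost- DTHMS")
```

The same normalization could be applied to account numbers (dashes, leading zeros) before the initial vendor/account lookup, which is where the earlier DSS matching discussion saw similar failures.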
Power BI Sync and Cadence
## Summary ### Power BI Deployment and Configuration Issues A significant technical challenge is preventing a newly created Power BI capacity from functioning correctly. The reports are inadvertently running on a shared, deprecated capacity instead of the dedicated deployment set up last week. This is causing refresh failures and performance limitations, with the system currently constrained to a 1GB memory limit. - **Incorrect Capacity Assignment:** Despite the new workspace being correctly assigned to the dedicated capacity, data refreshes are being executed on a legacy shared cluster. This misconfiguration is the core issue preventing the team from utilizing their new resources. - **Memory and Performance Constraints:** The system's current 1GB limit is insufficient for the reports, but upgrading to a higher tier (like F32) is considered cost-prohibitive. The team is actively seeking a more efficient solution. - **Lack of Internal Expertise:** A knowledge gap exists regarding the detailed configuration of Power BI's backend settings. The team finds the tool complex and requires deeper understanding to troubleshoot the capacity routing problem effectively. - **Contingency Plans:** The immediate plan is to conduct a more in-depth investigation. If the issue cannot be resolved internally, the team will examine the setup used by another business unit (UBM) and await the return of an internal expert, Sergio, for guidance. ### Team Availability and Schedule Coordination The discussion included aligning on upcoming holidays and individual availability to ensure project continuity. Key dates were identified that impact the ability to resolve the Power BI issue and plan future work. - **Upcoming Holiday Closure:** The team confirmed a holiday period spanning two specific dates in early January, during which key members will be out of office. - **Clarification of Working Days:** It was noted that the day immediately preceding the holiday is technically a working day, though some team members may be on leave. This clarification was important for planning follow-up discussions and work. ### Meeting Cadence and Ownership Transition A substantial portion of the meeting focused on re-evaluating the purpose and leadership of the recurring team call. The current structure, inherited from a past project phase, was deemed no longer optimal for the team's size and workflow. - **Evolution from Formal Agile to Ad-Hoc:** The call has already shifted from a traditional daily stand-up format to a more informal, as-needed discussion forum. This change reflects the team's smaller size and the practical need to discuss only pertinent blockers or topics. - **Proposal for Full Ownership Transfer:** A clear proposal was made for the product owner to take full ownership of the meeting. This includes the authority to change its schedule, frequency, and participant list to better suit the team's current operational needs. - **Current Facilitator's Supportive Stance:** The individual currently running the meeting expressed full support for the transition. They acknowledged their limited availability and explained that their involvement was originally a "free service" from another department that is no longer being fully utilized. They offered to remain available as an optional resource for coaching or historical context. - **Financial and Structural Considerations:** It was clarified that the scrum master services were previously charged internally, but the team has not been leveraging this structure. 
The discussion opened the possibility of formally discontinuing this service or seeking alternative Agile coaching support if desired in the future. ### Strategic Planning for the Upcoming Year The conversation naturally extended into preliminary thoughts about the team's operational strategy for the next year, linking the meeting structure to broader goals. - **Desire for Increased Meeting Frequency:** There is an interest in making team syncs more frequent, contingent on the availability of team members and the new meeting owner's capacity. - **Alignment with 2026 Objectives:** The restructuring of the meeting is seen as a step toward better aligning the team's daily communication with its strategic goals for the coming year. The need to understand broader organizational plans for the team in 2026 was also highlighted as important for future planning. ### Next Steps and Follow-up Actions The meeting concluded with a clear plan for immediate follow-up and a checkpoint to finalize decisions. - **Immediate Technical Investigation:** The primary action is to continue troubleshooting the Power BI capacity issue independently before escalating or waiting for external help. - **Scheduled Decision Point:** A follow-up discussion is planned for the following week to finalize the transition of meeting ownership and to define the new schedule and format. This allows time for the product owner to consider the optimal structure.
Review Ops Issues
## **Progress on Data Reparsing and Validation** The meeting opened with a discussion on the technical status of data processing systems. Validation procedures for data parsing have been successfully restored and are fully operational. A key action point was to proceed with re-parsing historical billing data for the Royal Farms client, specifically covering all records from May to the present. This task was estimated to take approximately one hour to execute. ## **Client Data Cleanup Timelines and Responsibilities** A significant portion of the discussion centered on upcoming data cleanup projects for specific clients. The immediate focus is on preparing data for a client meeting scheduled for the 5th, with a firm deadline for clean data set for the 8th. The cleanup process was slated to begin imminently, with the goal of completion by the following Monday, the 7th, accounting for team availability and coverage. The responsibility for managing this cleanup process and serving as the primary point of contact for related communications was clearly assigned. Developer representation in client meetings was discussed as a temporary measure to address ongoing platform issues until the systems are stabilized. ## **Upcoming Projects and System Adjustments** The conversation shifted to future work, highlighting an upcoming, more complex data project for a university client. This project is anticipated to involve splitting a single account bill across a large number of individual meters (e.g., 12-20), presenting a unique technical challenge. While the scope is not fully defined, it was noted that managing one complex account is preferable to handling hundreds of simpler ones. This foreshadows a need for system adjustments and development work to handle such billing structures effectively. ## **Cross-Team Communication and Issue Resolution** An operational challenge was raised regarding communication delays with an external payment processing team ("Pay"). Internal teams have reported not receiving responses for weeks on specific payment inquiries, such as issues with uncleared checks or discrepancies in remittance addresses. While one channel for payment questions has been responsive, this particular backlog indicates a breakdown that requires escalation and follow-up to resolve the outstanding issues for the operations team. ## **Strategic Alignment on Meeting Participation** A strategic discussion occurred concerning the allocation of meeting responsibilities. There was a consensus to limit developer attendance in routine client cleanup meetings in the long term. The preferred model establishes a primary point of contact for cleanup communications, while a separate individual handles broader client relations. Developer participation is seen as a temporary necessity to troubleshoot persistent platform issues but is not sustainable for ongoing operational meetings. The intent is to return to the standard communication protocol once system reliability is confirmed.
Bill Template Overhaul
## **Summary** ### **Problem Analysis: Bills Stuck Due to Template Mismatches** The core issue discussed was the failure in the document processing (DSS IPS) workflow when a bill cannot be properly matched to an existing account or template. When a match fails, the system incorrectly proceeds to create new line items, meters, or other data, which cascades into significant downstream problems. This creates non-conforming data and leads to "stuck" bills that require manual intervention to resolve. - **Root Cause: Inadequate Matching Logic**: The primary failure point is at the initial matching stage. If the system cannot match a PDF bill to an existing account number or find the correct bill template, it should not proceed with any automated creation. The current logic allows it to move forward, generating inaccurate data. - **Cascading Data Issues**: Creating new line items based on unmatched data results in entries that do not conform to what the broader system expects, causing validation failures and operational headaches. This problem extends beyond just account matching to include line-item-level mismatches on criteria like caption, commodity, and item description. ### **Proposed Solution: A Fail-Safe Workflow** The agreed-upon solution is to redesign the workflow to halt automatic processing upon any matching failure and instead flag the bill for human review within the DSS system. This prevents the creation of bad data and surfaces the specific error. - **Implement Hard Stops**: The system must be configured to stop entirely and not create any new database entries if a match for an account or template is not found. This is the critical first step to prevent data corruption. - **Introduce Descriptive Error Statuses**: Instead of a generic "in progress" status, bills that fail matching should be assigned a clear, descriptive status such as "Template Mismatch" or "Template Validation Failed." This immediately informs an operator of the nature of the problem. - **Surface Detailed Error Information**: For each failed bill, the system should display a detailed error message. This note would specify the exact cause of the failure (for example, which account field did not match or which line-item commodity was not found), providing the context needed for an operator to fix it. A rough sketch of this fail-safe flow appears after this summary. ### **Enabling Manual Resolution and Template Management** For bills that fail automated matching, operators need tools within the DSS interface to manually resolve the issue and instruct the system on how to process similar bills in the future. - **Manual Template Selection Interface**: A new screen or pop-up within the DSS application should allow operators to manually select the correct bill template for an unmatched account. This gives a human the ability to make the correct association that the automated logic missed. - **Training the System for Future Matches**: The solution must include a mechanism for the system to "learn" from these manual interventions. When an operator selects a template for a specific account or set of bill characteristics, DSS should remember that association and use it for automated matching in the future, gradually improving its accuracy. ### **Strategic Next Steps and Cleanup** Addressing this problem is recognized as a multi-phase initiative that requires both immediate fixes and longer-term cleanup of existing data. - **Phase 1: Investigate and Improve Matching Logic**: The first action is a focused investigation ("spike") to analyze exactly where and why the current matching logic fails.
The goal is to refine the algorithms to maximize successful automatic matches before any other changes are made. - **Phase 2: Data Cleanup**: Existing line items and other data that were incorrectly created by the DSS system in the past must be identified and deleted. This cleans up the database to remove noise and inaccuracies that could hinder future processing. - **Phase 3: Leverage Existing Templates**: A principle was established to always utilize existing line items from validated bill templates wherever possible, rather than creating new ones. Special attention was noted for line items with "use cost" or similar descriptors, which have been a particular source of errors. ### **Review of Other Development Items** The latter part of the meeting included a brief review of the current development pipeline in a testing environment. Several items were confirmed as completed and ready for, or already deployed to, production. - **Confirmed Production Deployments**: Items such as the lab report feature, a prompt update for a specific vendor (Constellation), and a fix for automatic page refresh were verified as working in production. - **Team Deployment Authority**: It was confirmed that the development team has the necessary privileges to push validated changes from the preview/staging environment directly to production, streamlining the deployment process. - **Ongoing Monitoring**: The team committed to reviewing remaining tickets, particularly those related to the bill template project, to ensure they are moved forward appropriately, with an understanding that current usage patterns allow for safe deployment.
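A rough sketch of the fail-safe flow referenced above, with hard stops and descriptive statuses; the status names echo the summary, while the field names and matching criteria are simplified assumptions.

```python
# On any matching failure the bill receives a descriptive status plus a
# detail note, and no meters or line items are created. Field names,
# template structure, and criteria are illustrative only.

from dataclasses import dataclass
from enum import Enum

class BillStatus(Enum):
    MATCHED = "matched"
    TEMPLATE_MISMATCH = "template_mismatch"
    TEMPLATE_VALIDATION_FAILED = "template_validation_failed"

@dataclass
class MatchResult:
    status: BillStatus
    detail: str = ""

def match_bill(bill: dict, templates: dict) -> MatchResult:
    template = templates.get(bill.get("account_number"))
    if template is None:
        # Hard stop: surface the failure instead of creating new entries.
        return MatchResult(
            BillStatus.TEMPLATE_MISMATCH,
            f"No template for account {bill.get('account_number')!r}",
        )
    missing = [li["caption"] for li in bill.get("line_items", [])
               if li["caption"] not in template["captions"]]
    if missing:
        return MatchResult(
            BillStatus.TEMPLATE_VALIDATION_FAILED,
            f"Unmatched line item captions: {missing}",
        )
    return MatchResult(BillStatus.MATCHED)
```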
Integration Planning
## Q1 Foundation and Priorities for DSS The meeting focused on establishing the groundwork for key Data Services (DSS) initiatives planned for the first quarter. The primary objective is to move beyond daily firefighting and set a sustainable foundation for new integrations and system improvements. Two main technical efforts were prioritized for immediate action: integrating Constellation bill data and adding new third-party data sources, starting with Arcadia. A parallel, crucial initiative involves unifying and managing customer credentials across different platforms to ensure data integrity for these new integrations. ## Constellation Bills Integration Strategy Integrating Constellation bills into DSS requires careful navigation of internal access and methodology decisions. The core challenge is determining the best method to retrieve Constellation bill data. There are two internal approaches under consideration: one filters customers based on deal factors, while another aims to access *all* Constellation bills. The deal-factor method is seen as limiting because it could miss certain customer segments, like those with embedded UBM. Therefore, the preference is leaning toward the second, more inclusive option to avoid future gaps and re-work. **Access and coordination are significant hurdles.** The Constellation network has strict access controls, preventing standard external connections. This necessitates close collaboration with the internal team managing the data pipeline (referred to as "Laser Fisher") to understand how to retrieve the data within their network. The plan is to join ongoing discussions with that team to clearly define requirements for DSS, separate from any other projects (like carbon accounting) that may have caused prior confusion. ## Onboarding Third-Party Data: Arcadia and UtilityAPI The integration of new data sources, particularly Arcadia, is a major pillar of the Q1 plan, with UtilityAPI as a secondary, cost-effective option for specific vendors. **Arcadia is the preferred starting point** due to its robust documentation, high vendor coverage, and reliable webhook notifications for new bill availability. The foundation work involves setting up new services in DSS to listen to these webhooks, fetch bills and their accompanying JSON data, and map this information into the DSS structure. A Python script for normalizing Arcadia's JSON into the DSS format already exists and will be shared as a starting point. **UtilityAPI has limitations,** such as issues with multi-factor authentication (MFA) for some utilities and a less event-driven model (relying on polling). It would be used selectively for high-volume, cost-sensitive scenarios where its coverage is sufficient. The architecture for these integrations must be modular, allowing the underlying data provider to be swapped with minimal disruption to the core workflow. ## Unifying Credential Management A critical dependency for all new integrations is establishing a single, reliable source of customer login credentials for utility websites. Currently, credentials are stored separately in DSS, UBM, and even in decentralized Smartsheets, leading to discrepancies. An analysis revealed mismatches in usernames, passwords, and account numbers between systems for the same customers, which would cause failures in automated bill retrieval. 
**The proposed solution is to create a new, unified credential database.** This database would become the single source of truth, and all other systems (DSS and UBM) would either write to it directly or have their updates synchronized to it. **Immediate action involves manually reconciling a small batch of discrepancies** (estimated at around 20 records) to clean the initial data set. Long-term, the goal is to deprecate the use of Smartsheets for credential storage entirely and ensure any UI or process that updates credentials feeds into this centralized store to prevent future drift. ## Data Normalization and Process Architecture The approach to handling incoming data from new sources involves creating a flexible, maintainable pipeline for transformation and storage. The process must handle two parallel flows: one where a PDF is fetched and processed through DSS's existing engines, and another where the provider's native JSON is transformed directly into the DSS format using a custom normalization service. Initially, both paths may run to validate accuracy, but the long-term vision is to rely on the normalized JSON to reduce DSS processing load. **Maintaining data lineage is essential.** The plan includes storing the original provider JSON alongside the transformed DSS JSON and the final PDF (a sketch of this flow appears after this summary). This ensures full traceability for debugging and audit purposes. Each data source (e.g., Arcadia, UtilityAPI) will require its own maintained normalization mapping code, which is anticipated to need periodic updates and troubleshooting, especially during the initial onboarding phase. ## Testing, Quality Assurance, and Rollout A phased, careful rollout is planned to avoid creating a new stream of operational issues. Before onboarding live customer data, the team plans to conduct thorough testing using existing credentials and historical bill data. This "dry run" will help identify data quality issues, assess the speed of data flow from source to DSS, and compare the output from the new normalization service against the established DSS PDF processing results. The goal is to proactively **identify vendor-specific "kinks" or gaps in the coverage** offered by the third-party providers. The strategy emphasizes minimizing "fire drills" post-launch by investing in upfront testing and ensuring a robust credential management system is in place first, as credential errors are a primary source of failure. ## Long-Term Vision for DSS Operations The overarching aim is to evolve DSS into a more streamlined and predictable system. The desired end-state is a clean, accountable process: for any given client, the system has a verified set of credentials, uses them to fetch bills via APIs, logs all transactions, and produces standardized outputs. This would move the team away from constant data quality cleanup and toward higher-value activities. Success is defined by having a clear, singular answer for customer onboarding status and bill retrieval health, moving away from the current situation where multiple systems provide conflicting information.
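A minimal sketch of the webhook-driven fetch, normalize, and store flow discussed above. The Arcadia field names here are invented placeholders (the real mapping lives in the existing Python normalization script), and the injected callables stand in for provider API and storage code.

```python
# Keep the provider's original JSON alongside the normalized DSS JSON and
# the PDF for full lineage. One normalize_* function is maintained per
# data source; all field names below are hypothetical.

def normalize_arcadia(src: dict) -> dict:
    """Per-source mapping from the provider's schema to the DSS shape."""
    return {
        "account_number": src["accountNumber"],
        "vendor": src["utilityName"],
        "total_charges": src["totalAmountDue"],
        "line_items": [
            {"caption": item["name"], "amount": item["cost"]}
            for item in src.get("lineItems", [])
        ],
    }

def handle_webhook(event: dict, fetch_json, fetch_pdf, store) -> dict:
    """Injected callables keep the underlying provider swappable,
    matching the modular architecture goal."""
    provider_json = fetch_json(event["bill_id"])
    dss_json = normalize_arcadia(provider_json)
    store(
        original=provider_json,   # raw payload, kept for audit/debugging
        normalized=dss_json,      # what downstream DSS flows consume
        pdf=fetch_pdf(event["bill_id"]),
    )
    return dss_json
```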
DSS Updates Review
## Customer The customer is a specialized team within an energy data services company, responsible for processing and auditing utility bills for clients. Their core function involves data acquisition, account setup, and ensuring billing data from various vendors (like propane, water, and electric suppliers) is accurately captured and formatted within their systems. They operate at the intersection of client-specific rules, complex vendor billing structures, and internal data processing platforms. Their deep expertise is evident in their handling of nuanced scenarios, such as accounts with multiple meters for different commodities (e.g., cylinder liquid vs. propane) or locations billed under a single customer ID. ## Success The most significant achievement discussed is the strategic shift towards **utilizing a centralized, definitive "build template"** as the single source of truth for processing bills. This template, historically maintained in a legacy system, contains the approved account structure, including correct meter setups, units of measure, and observation types. The success lies in the development work to integrate this template concept directly into the new DSS (Data Services System), ensuring all automated bill processing is aligned with pre-established, client-approved configurations. This directly addresses a long-standing pain point where the system was creating inconsistent and erroneous data by inferring information, leading to operational delays. The move to enforce template adherence is seen as a foundational correction that will drastically improve data consistency and reliability. ## Challenge The primary challenge stems from the **system's previous behavior of autonomously creating new meters and line items**, overriding the established templates. This resulted in a proliferation of incorrect or duplicate data (e.g., creating a new meter in gallons when the template specifies thousand gallons/Kgal). This caused bills to become "stuck" in processing due to unit mismatches and created an overwhelming cleanup burden. The team is left manually sorting through numerous active and inactive line items to find the correct one for data entry. A critical sub-challenge is the inability to permanently delete the clutter; they can only inactivate items, which still linger in the system and are sometimes incorrectly used by the automation. This has made account management cumbersome and error-prone since the new system's launch. ## Goals The conversation outlines several clear goals for improving the product and service: - **Enforce Template Supremacy:** Halt the creation of new meters by the DSS for existing accounts. The system must conform to the existing build template, and any deviations should be flagged for human review rather than automatically applied. - **Implement Effective Cleanup:** Develop a mechanism to permanently delete inactive or erroneously created line items and meters, not just inactivate them. An automated process or a user-friendly interface within DSS to manage this cleanup is desired. - **Enhance Review Capabilities:** Integrate access to prior bills and their PDFs within the DSS interface. This historical context is vital for troubleshooting processing errors, verifying account details, and understanding billing trends over time. 
- **Complete Note System Integration:** Ensure all layers of notes from the legacy system, especially the specific **client-vendor notes** that contain unique processing instructions for particular client and vendor combinations, are fully visible and functional within DSS. - **Ongoing Alignment:** Establish a structured, weekly sync between the development and operational teams to collaboratively plan, review progress, and address emerging issues, ensuring the product evolves in direct response to frontline needs.
DSS Daily Status
## **Introduction and Meeting Goal** The meeting was convened to establish a foundation for refactoring the Data Services (DSS) platform. The primary objective discussed was to make DSS more modular to accommodate new data sources and streamline processes, specifically for integrating Constellation Bills and a new external utility API. The session focused on identifying core tasks, understanding technical requirements, and planning the execution pathway for these integrations. ## **DSS Refactor: Core Objectives and Vision** The central goal of the initiative is to enhance the modularity of the DSS platform. This refactor aims to create a more flexible architecture that can easily integrate new data ingestion sources without disrupting existing workflows. The vision is to build a system where different data pathways can coexist, allowing for services to be added or bypassed as needed, ultimately making the system more maintainable and scalable. - **Make DSS more modular:** The refactor is intended to decouple services and create distinct pathways for processing bills from different vendors, which will simplify future integrations and maintenance. - **Accommodate new data sources:** The immediate driver for the modularity work is the need to seamlessly onboard data from Constellation Bills and a new external utility API, moving beyond the current web download methods. - **Maintain existing functionality:** A key principle is that the new architecture must not break or require major changes to the existing flows for current data sources, ensuring stability during the transition. ## **Constellation Bills Integration Strategy** A significant portion of the discussion was dedicated to planning the integration of bills from Constellation. The approach involves fetching XML and PDF data via an official API and mapping it into the existing DSS data structure. - **Data acquisition via API:** The plan is to pull XML and PDF bill data directly from Constellation's API, moving away from manual downloads. This requires understanding the API's authentication, endpoints, and the schedule for bill availability. - **Critical unknowns to resolve:** Several key questions were identified that need answers from the vendor or internal teams, including: the specific account identifiers needed for API requests, how billing cycles vary per client, and the exact timing for when bill data becomes available through the API. - **Processing and mapping flow:** Once retrieved, the XML data will need to be transformed and mapped to the existing DSS schema. The PDFs will be stored in blob storage. After this mapping and storage step, the existing DSS validation and processing flows would be utilized. - **Delegation and existing work:** It was noted that preliminary mapping work between Constellation's XML and the DSS structure has already been explored by another team member. This work can be leveraged and potentially delegated to other developers to accelerate progress, with a focus on code review and guidance. ## **External Utility API Integration Details** The integration of a new external utility API presents a similar but distinct challenge. This integration will be triggered via webhooks and involves processing JSON data extracts alongside PDFs. - **Webhook-driven initiation:** Unlike scheduled API calls, this integration will rely on webhook notifications to signal when new bill data is available, prompting the system to fetch the relevant information. 
- **Authentication and client scope:** Access to this external API will require managing credentials, such as API tokens. The rollout will be phased, starting with a limited set of bill-pay customers to validate the integration before scaling. - **Data transformation task:** The core technical task is to map the JSON output from this external API to the standard DSS data structure, similar to the XML mapping required for Constellation. - **Coexistence with current system:** A crucial point was that bills from this new source will still need to create bill records within the DSS and related legacy systems initially, as the platform is not yet ready to fully decouple from them. ## **Refactoring Approach and Technical Implementation** The conversation delved into the technical strategy for implementing these new services within DSS, clarifying that the "refactor" might be more incremental than initially envisioned. - **Services within DSS:** The agreed approach is to build the new API-fetching and data-mapping services as new components *inside* the existing DSS application, rather than as completely external systems. - **Leveraging existing flows:** The intention is for these new services to hand off data to the established DSS processing, validation, and storage flows after the initial source-specific mapping is complete. This minimizes changes to the core, tested logic. - **Re-evaluating refactor scope:** There was a realization that the foundational "refactoring" work needed to enable modular pathways might be less extensive than previously thought. The current plan is to proceed by building the new integration services first, and the necessary architectural adjustments will become clearer and can be addressed during that implementation. - **Retry mechanisms:** For both new integrations, robust retry logic will be necessary to handle cases where a bill is not immediately available via the API, ensuring no data is missed. ## **Ancillary Priority: Template Cleanup and Enforcement** Near the end of the meeting, a separate but important operational issue was raised concerning the management of templates within the system. - **Cleaning existing templates:** This was highlighted as a high-priority task because the proliferation of unused or poorly defined templates is directly contributing to increased operational costs and complexity. - **Enforcing template standards:** Alongside cleanup, there is a need to investigate how to more rigidly enforce template creation standards going forward. This would provide better governance and clarity on which parts of the system are generating specific templates. - **Understanding system impact:** The underlying goal is to gain a clearer understanding of system resource usage and costs by having a clean and well-documented set of templates.
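For the retry mechanisms noted above, a minimal sketch with exponential backoff; the attempt count, delays, and fetch signature are all assumptions rather than agreed parameters.

```python
# Retry with exponential backoff so a bill that is published late by the
# vendor API is still picked up rather than silently missed.

import time

def fetch_with_retry(fetch, bill_id, attempts: int = 5, base_delay: float = 60.0):
    for attempt in range(attempts):
        result = fetch(bill_id)   # returns None while the bill is unavailable
        if result is not None:
            return result
        if attempt < attempts - 1:
            time.sleep(base_delay * 2 ** attempt)   # 60s, 120s, 240s, ...
    raise TimeoutError(f"Bill {bill_id} not available after {attempts} attempts")
```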
DSS Daily Status
UBM Alignment Meeting
## Customer The customer is a long-term client, onboarded approximately in August of the previous year, who utilizes the platform for critical energy data management and bill processing. Their operations involve handling complex utility data across multiple sites and vendors. They have experienced a prolonged period of instability with the product, leading to significant manual effort to clean and validate data. The relationship has been strained by frequent changes in their main points of contact within the service team, contributing to a sense of discontinuity and frustration. ## Success The most significant recent achievement has been the restoration of confidence through transparent communication and a committed roadmap for systemic improvement. The customer has explicitly acknowledged and appreciated the renewed effort to address long-standing issues. A key indicator of this restored trust is their strong desire for a dedicated, consistent point of contact from the product leadership team to ensure continuity and direct escalation. They are supportive of the strategic shift from reactive, one-off fixes to solving foundational platform problems that will benefit all users. ## Challenge The primary and most pressing challenge is the historical lack of accurate data within the system, which has forced the customer's team to engage in extensive, unsustainable manual cleanup for over a year. This core issue of data inaccuracy erodes trust and halts business growth, as evidenced by their hesitation to onboard new clients due to low confidence in the platform's output. Compounding this is frustration over perceived misalignment between the original sales promises and the product's delivered capabilities, as well as internal team turnover that has disrupted the partnership and communication flow. ## Goals The customer's objectives are centered on achieving stability, accuracy, and scalability. Their key goals include: - **Data Integrity and Automation:** Eliminating manual correction work by ensuring the platform delivers accurate, reliable data from the outset through improved validation and normalization processes. - **Systemic Resolution:** Moving beyond symptom-level fixes to address the root causes of data issues, thereby preventing a continuous cycle of new problems. - **Reliable Partnership:** Establishing a stable and expert point of contact to provide consistency in strategic communication and issue resolution. - **Confident Expansion:** Regaining the ability to securely and confidently onboard new clients and sites using the platform, enabling business growth. - **Clear Product Alignment:** Gaining a clear and updated understanding of the product's roadmap and capabilities to ensure future expectations are properly set and met.
[EXTERNAL]Updated invitation: Constellation/Arcadia Plug Intro @ Tue Dec 23, 2025 2pm - 2:30pm (EST) (faisal.alahmadi@constellation.com)
## Customer The customer is a company implementing an energy data aggregation and automation platform (referred to as PluG) to streamline utility data collection for their commercial clients. Their background involves managing numerous commercial utility accounts across a wide range of providers, including large investor-owned utilities and smaller municipal providers. Their core operational workflow centers on integrating the platform's API to automatically pull utility statements and interval data, which is then parsed and stored in their internal database for analysis and reporting. The implementation involves cross-functional teams, including a project lead for the API integration and operational teams who will be primary users of the administrative dashboard. ## Success The most significant success anticipated from using the product is the complete automation of a previously manual and cumbersome data collection process. The platform enables the automatic, recurring retrieval of PDF utility statements and granular interval consumption data directly from utility provider portals once credentials are submitted. This eliminates the need for manual logins and downloads, saving substantial time and reducing human error. A major value-add is the system's ability to automatically pull up to 12 months of historical data for newly added accounts and the flexibility to request even deeper history on an ad-hoc basis. Furthermore, the product's design allows for efficient scaling, as credentials can be added individually through a dashboard, in bulk via CSV upload, or programmatically through the API, fitting seamlessly into the customer's planned automated workflows. ## Challenge The primary challenge identified is handling utility providers that are not currently supported by the platform's extensive but finite library. The customer's portfolio includes accounts with obscure municipal providers where they may have only one or few accounts, making it uncertain whether these providers will be prioritized for integration. This creates a potential gap in full automation, possibly requiring a separate, manual process to collect bills from these unsupported providers. Additionally, while the platform indicates at the provider level whether credential-based access is generally possible (`true`/`false`), the customer must still verify interval data availability at the individual meter level after enrollment, adding a step to their workflow. There is also some need for clarification on mapping their internal data structures to the API's response schema and understanding the full scope of available API monitoring and error logging capabilities. ## Goals The customer's key goals for using the platform are multi-faceted: - To fully automate the collection of monthly utility statements and interval data for the majority of their commercial client accounts. - To successfully integrate the platform's API into their own systems, creating a seamless, job-driven pipeline for requesting and receiving data. - To achieve comprehensive coverage by also developing a process for onboarding accounts from utility providers not currently supported by the platform, either through new template development or manual workarounds. - To utilize the platform for both standard, recurring data pulls and for ad-hoc requests for additional historical data or specific interval data. 
- To empower their internal operations and customer relationship teams with administrative access to the dashboard for visibility into data collection status and metrics.
DSS Updates Review
## Summary ### API and Edit Functionality Development The discussion centered on the current state of an edit feature within an application, which is currently implemented as a read-only interface. This is a temporary state, as the necessary backend APIs for dynamic data fetching and updates have been identified but not yet fully integrated. The immediate focus is on stabilizing core functionality before enhancing the user interface with dynamic dropdowns and other interactive elements. - **Current read-only edit interface:** The feature displays data fields but does not allow user modifications. Key non-editable fields include IDs and specific line items, which are considered static reference data. - **Prepared backend for updates:** An update API endpoint has been created and is undergoing local testing to determine the correct data payload required for successful operations (a hypothetical sketch of such a local test appears after this summary). - **Planned integration of supporting APIs:** Future work involves calling additional APIs, such as a Service Account API, to populate dynamic selection fields (e.g., for item types) and enable full editing capabilities. ### Template View and Data Display Issues Progress was reported on a template view page designed to display detailed file information. However, a recent code change introduced a regression that broke the automatic retrieval and display of key identifiers, such as Vendor ID and Client ID. - **Functionality regression:** A previously working feature that auto-populated vendor, client, and account data on the template view page is currently non-functional due to accidental code removal or alteration. - **Prioritization of core fixes:** The immediate plan is to restore this core data display functionality before moving on to further enhancements or refinements of the page. ### Deployment Pipeline and Environment Status The team reviewed the deployment status of various changes across development, test, and production environments. A general directive was given to promote any completed and tested work to production to prevent backlog accumulation in lower environments. - **Limited production deployments:** Most recent changes, including those for the discussed edit functionality and other tickets, are currently only deployed in the test environment. - **Specific updates in test:** Recent test deployments include fixes for two FTC Connect tickets (one related to BD enrollment and another concerning page reload behavior), as well as changes related to caching mechanisms. - **Push to production:** There is a clear intention to deploy any qualified changes from the test environment to production promptly to keep environments synchronized and deliver finished work. ### Administrative and License Assignment An administrative action was taken to resolve a software licensing issue, ensuring team members have the necessary access levels for data visualization and reporting tools. - **Power BI license upgrade:** A team member was assigned a Power BI Premium per User license to address access needs, supplementing an existing Pro license. - **Dual license coverage:** The user was intentionally kept assigned to both license tiers temporarily to guarantee uninterrupted access and full functionality while the new license assignment is validated.
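The local payload testing referenced above might look roughly like this; the endpoint URL, resource ID, and field names are hypothetical, and the omission of IDs and line items reflects their read-only status in the current interface.

```python
# Hypothetical local probe of the update endpoint to pin down the payload
# it accepts. Everything here (URL, fields, auth-free access) is assumed.

import requests

payload = {
    "caption": "Natural Gas Cost - DTHMS",
    "unit_description": "DTH",
    # IDs and line items are static reference data, so they are omitted.
}

resp = requests.patch(
    "http://localhost:8000/api/templates/123",  # hypothetical dev endpoint
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())   # inspect which fields the API actually persisted
```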
Overdue Payments Push
## Summary The meeting primarily focused on reviewing urgent operational priorities, particularly clearing past-due invoices for key clients, addressing technical API alignment issues, and managing vendor performance concerns. The discussion emphasized rapid escalation and resolution of payment processing bottlenecks to prevent client relationship issues. ### Urgent Invoice Processing and Emergency Payments Immediate action is required to process a backlog of past-due invoices for major clients, with several flagged for emergency payments to avoid late fees and service disruptions. - **Victor and Extension backlog clearance:** There is a significant queue of past-due invoices for clients Victor and Extension, with many due dates already passed. For Victor, 145 bills are pending, with 28 past due and 64 due within 10 days. A similar situation exists for Extension, with 17 past due and 74 due soon. The team plans to manually review and initiate emergency electronic payments for applicable charges this morning to ensure same-day processing. - **Critical clients requiring special attention:** Specific high-priority clients like PNB and Royal Farms were highlighted due to their strict policies and propensity to impose heavy late fees. For instance, an electric bill for PNB was identified as needing immediate emergency payment to prevent service shut-off. - **Process for handling discrepancies:** A method for filtering and reviewing invoice discrepancies was discussed, particularly for "total charges mismatch" errors. The approach involves flagging discrepancies greater than $1,000 for manual review (see the sketch after this summary), as smaller amounts are often related to credits or minor adjustments from previous bills. ### Technical Adjustments and API Integration Technical misalignments in data pipelines and ongoing work to enforce system templates were reviewed to reduce recurring processing errors. - **PayClearly API date field alignment:** A misalignment was identified where incorrect dates are being pulled from the PayClearly API for certain fields, such as virtual credit card interchange information. The core issue is ensuring the system captures the correct "submitted date" to align with receipt dates, which is crucial for accurate processing. - **Enforcing account templates to fix recurring errors:** To address persistent errors like unsupported "Units of Measure" (UoM) and mapping issues, the team is moving away from one-off fixes. The new strategy involves enforcing standardized bill templates for each account, which is expected to systematically resolve these common slipping points. ### Vendor Management and SLA Challenges Performance issues with a key payment vendor, Ascension, were discussed, focusing on their inability to meet SLAs despite claims of sufficient capacity. - **Escalating unresolved check payments:** There is a list of issued checks for Ascension that are still outstanding and available for cancellation and re-issue electronically via the vendor portal. Despite the vendor stating the checks are "unavailable," internal review shows the electronic payment option is often still viable, requiring direct intervention to reprocess. - **Addressing chronic SLA failures:** Although Ascension claims to have adequate staff and can handle increased volume, they consistently fail to meet SLAs for escalated, due payments. A communication gap exists where the team's willingness to pay extra for guaranteed priority processing is not being actioned by the vendor.
A direct call was scheduled to reiterate this priority and resolve the disconnect. ### Internal Workflow and Resource Planning The team coordinated on internal task handovers and reviewed the status of a major client onboarding project. - **Handover for upcoming time off:** With team members taking time off, a brief handover was arranged to continue the daily review and approval of internal change requests, such as mapping and UoM adjustments, to avoid workflow disruption. - **Follow-up on Ascension account mapping:** An ongoing project with client Ascension involves mapping a large number of accounts. While progress has been made (from 100 to 43 remaining), the final list of 43 accounts has not yet been received back. Additionally, a separate review of 280 "arriving bills" was noted as pending action.
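A minimal pandas sketch of the discrepancy triage described above; the column names are assumptions about the export's layout, and only the $1,000 threshold comes from the discussion.

```python
# Surface "total charges mismatch" rows whose absolute discrepancy exceeds
# the threshold; smaller gaps are typically credits or minor adjustments
# carried over from previous bills. Column names are hypothetical.

import pandas as pd

def mismatches_for_review(df: pd.DataFrame, threshold: float = 1000.0) -> pd.DataFrame:
    mask = (
        (df["error_type"] == "total charges mismatch")
        & (df["expected_total"] - df["actual_total"]).abs().gt(threshold)
    )
    return df.loc[mask].sort_values("due_date")   # most urgent first
```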
Long Term Plan - Payment Process Improvement Discussion
## Summary The meeting focused on resolving a critical billing and payment issue with Google Cloud that recently led to a service interruption. The core problem was a disconnect between Google's automated invoice delivery and the company's automated invoice processing system, compounded by inadequate internal oversight of the cloud spending contract. The discussion centered on diagnosing the root causes, implementing immediate manual safeguards, and defining a path toward a fully automated, reliable process. ### Root Cause Analysis: Why the Service Was Interrupted The service suspension occurred because invoices were not being paid on time, but the underlying issue was a failure in communication between the two systems. Google's invoices were being sent to an unmonitored, automated email address (`invoice@constellation.com`) that could not be verified by Google's system, causing a breakdown in the official distribution channel. Furthermore, the internal team responsible for administering the cloud contract and its funding in the Asset Suite 9 system was not properly set up or actively monitoring the contract's balance, so no one was alerted when the funding was depleted or when invoices were rejected. - **Unverifiable Invoice Email Address:** The primary technical hurdle is that Google requires a verified email address to send invoices, but the company's `invoice@` address is an unmonitored mailbox used solely for automated processing. It cannot complete a manual verification step, so invoices were not being formally delivered through Google's system. - **Lack of Internal Contract Oversight:** Administratively, the correct personnel (Sunny and Faisal) did not have access to the Asset Suite 9 system where the purchase order (PO) and its funding are managed. Without this access, no one was tracking the contract's consumption or could react when the PO ran out of funds, which caused new invoices to be rejected by the internal system even if they were received. ### Interim Solution: Manual Processes for Immediate Reliability To prevent another interruption while a permanent fix is developed, the team agreed on a set of manual steps to ensure invoices are seen and paid. The key is to leverage Google's capability to send invoice notifications and copies to individual, verified human administrators. - **Updating Google Cloud Payment Contacts:** Faisal confirmed he has billing administrator privileges in the Google Cloud console. He updated the payment profile to include himself and Sunny, ensuring both will receive direct email notifications for all future invoices and any critical account alerts (e.g., past-due notices). - **Manual Invoice Submission:** As a temporary workflow, Faisal will take responsibility for receiving the invoice notifications from Google and manually forwarding each invoice PDF to the `invoice@` address so it can be ingested into the Oracle payment system. This ensures payments are initiated even though the automated channel is broken. - **Validating Invoice Data:** The team confirmed that recent Google invoices do contain the necessary PO number, which is crucial for the internal system to match the invoice to the correct contract and approve it for payment. This was verified during the call by checking a sample invoice. ### Path to Full Automation: Verifying the Automated Email Address The long-term goal is to restore the fully automated handoff from Google's billing system to the company's Oracle system, eliminating manual touchpoints. 
The sole obstacle is the verification of the automated `invoice@` email address. - **Google-Side Force Verification:** The Google account representative is actively pursuing a "force verification" of the `invoice@` address through internal collections and billing support teams. If successful, Google's invoices would flow directly to the automated mailbox without requiring manual intervention from the company's side. - **Contingency Planning:** If force verification is not possible, the company would need to explore technical solutions, such as having the IT team temporarily configure access to the `invoice@` mailbox to complete the verification, though this was noted as a less ideal path. ### Ongoing Contract and Financial Management Beyond invoice delivery, the meeting highlighted the need for proactive financial management of the cloud contract to avoid future "insufficient funds" scenarios, even if invoices are being paid. - **Ensuring Proper Contract Administration:** It was stressed that Sunny and/or Faisal must be granted administrator access to Asset Suite 9. This access is required to monitor the contract's remaining value, approve increases in funding, and receive system alerts if invoices are rejected due to an exhausted budget. - **Proactive Funding Monitoring:** The team acknowledged that the contract's annual funding estimate may not match actual usage. Without active monitoring, a surge in cloud usage could deplete the PO faster than anticipated, leading to invoice rejections and potential service disruption, even if the invoice delivery problem is solved. This requires a separate, ongoing oversight process.
Review URA Cleanup
## Summary The meeting focused entirely on presenting a technical solution for cleaning a specific data column that contained inconsistent formatting. The core of the discussion was a step-by-step walkthrough of an automated method to extract clean date information from the problematic source data. ### The Data Cleaning Solution A specific formula-based approach was identified to efficiently clean the target column and extract usable date information. - **Using the `TEXTBEFORE` Function:** The proposed solution involves creating a new column and using a formula, specifically `=--TEXTBEFORE(...)`, to parse the original cell content. The formula is designed to remove all characters that appear after a specific delimiter (a space and another character), effectively isolating the relevant date portion that precedes it. - **Handling the Numerical Output:** After applying the formula, the result may initially appear as a raw serial number (e.g., 45370). This number is correctly identified as the date's underlying numerical representation within the spreadsheet software and simply needs to be reformatted. - **Final Formatting for Presentation:** The final step to achieve a human-readable date is to change the cell format of the new column to a "Short Date" format. This instantly converts the serial number into a standard date display (e.g., 03/19/2024), resulting in a clean, consistent column ready for customer-facing reports or further analysis. This method was presented as a means to significantly automate the data preparation workflow, ensuring more reliable and presentable data for end-users.
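For the same cleanup outside the spreadsheet, a rough pandas equivalent of the `=--TEXTBEFORE(...)` plus Short Date approach; the column name and sample values are invented for illustration.

```python
# Keep everything before the first space (the "text before" the delimiter),
# then parse it as a real date and render it in a short-date style.

import pandas as pd

df = pd.DataFrame({"raw": ["03/19/2024 *ADJ", "04/01/2024 *EST"]})
df["clean_date"] = pd.to_datetime(
    df["raw"].str.split(" ", n=1).str[0],   # text before the delimiter
    format="%m/%d/%Y",
)
print(df["clean_date"].dt.strftime("%m/%d/%Y"))  # short-date style display
```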
Report
## **Overview of the Report and its Purpose** The meeting centered on analyzing and improving a daily financial report used to track client payment transactions. This report, sourced from the PayClearly system, is the de facto system of truth for monitoring how many bills are paid on behalf of clients via various methods, specifically ACH, check, and virtual card (V-card). The primary objective of this analysis is to accurately answer recurring client inquiries about their monthly payment activity, broken down by payment method and associated dollar values. - **Purpose and Client Context:** The report is critical for providing clients with transparent, monthly summaries of payment volumes and values. Clients frequently ask for these figures to reconcile accounts and understand transaction flows, particularly concerning checks, which can be a pain point due to potential delays and associated late fees. - **Current Methodology:** Historically, the team has been generating these summaries by filtering the data based on the invoice's "Due Month." This method was established to fulfill client requests but has inherent flaws, as it doesn't accurately reflect when the payment processing work was actually performed. ## **Identification of a Critical Flaw in Current Reporting** A significant flaw in the existing reporting logic was identified: using the "Due Month" as the primary filter does not accurately represent the company's actual processing activity within a given month. - **The Core Issue:** The "Due Month" on an invoice does not correlate to when the payment was processed. Invoices due in one month may be processed and paid in a subsequent month, especially when catching up on backlog or handling early payments for future dues. Relying on due date leads to misrepresenting monthly activity counts and financial figures reported to the client. - **Impact on Data Accuracy:** For example, showing check transactions for the month of June based on due date is misleading if those checks were actually processed and issued in July or August. This discrepancy means clients are not receiving a true picture of the work completed in a specific period. ## **Proposed Shift to a Process-Centric Date Metric** The discussion concluded that the reporting must shift from using "Due Month" to a date metric that reflects when the payment was actually processed to provide an accurate picture of operational activity. - **New Target Metric:** The consensus was to use the date a payment file was created (the "Created" date timestamp in the report) as the correct filter for monthly summaries. This aligns the reported numbers with the actual work performed by the team in that calendar month, irrespective of the original invoice due date. - **Inclusion of "Tracking" Status:** To fully capture all activity, it was agreed that payments in both "Processing" and "Tracking" statuses should be counted. "Tracking" represents work that has been initiated and is awaiting bank confirmation, which then flips to "Processing"; thus, both statuses constitute completed operational effort. ## **Technical Hurdle: Data Format and Excel Manipulation** A major technical obstacle to implementing this solution is the non-standard format of the "Created" date field in the raw PayClearly report. - **Format Problem:** The "Created" field contains a timestamp in a string format that Excel does not automatically recognize as a date (e.g., "20231115 06:28:00" without standard date separators).
This prevents the direct use of Excel's pivot table and date filtering functions to roll up transactions by month. - **Explored Solutions:** Potential solutions discussed involve data cleaning within Excel, such as: - Using text functions (e.g., `LEFT`) to extract just the date portion of the string into a new column. - Attempting to force a format change via Excel's built-in date formatting tools, though initial attempts were unsuccessful with this specific data structure. - Creating a helper column with a cleaned date that can be used as a proper field for monthly aggregation in pivot tables. ## **Strategic Implications and Path Forward** The conversation underscored the importance of accurate reporting for client trust and internal clarity, while acknowledging the need for a practical, if temporary, solution. - **Client Communication and Caveats:** It was noted that continuing with the "Due Month" method is acceptable only if the inherent inaccuracy is clearly understood and communicated as a caveat to clients. However, the goal is to move away from this acknowledged error to a more precise reporting standard. - **Manual Work and Next Steps:** Until a technical fix for the date formatting is implemented, the team will continue to manually compile the monthly figures-a labor-intensive process that involves sorting and counting transactions based on the "Created" date. The immediate next step is to develop a reliable method to parse the PayClearly timestamp into a usable Excel date format to enable automated, accurate monthly summaries.
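To make the proposed fix concrete, here is a minimal sketch of the parsing and monthly rollup in Python rather than Excel. The timestamp format follows the example quoted above ("20231115 06:28:00"); the row shape, statuses, and amounts are invented for illustration.

```python
from datetime import datetime

# Hypothetical rows standing in for the raw PayClearly export.
rows = [
    {"created": "20231115 06:28:00", "status": "Processing", "amount": 120.00},
    {"created": "20231204 09:02:11", "status": "Tracking",   "amount": 88.50},
    {"created": "20231216 14:45:03", "status": "Cancelled",  "amount": 10.00},
]

monthly = {}
for row in rows:
    # Count both "Processing" and "Tracking", per the agreed definition of
    # completed operational effort.
    if row["status"] not in ("Processing", "Tracking"):
        continue
    created = datetime.strptime(row["created"], "%Y%m%d %H:%M:%S")
    month = created.strftime("%Y-%m")
    count, total = monthly.get(month, (0, 0.0))
    monthly[month] = (count + 1, total + row["amount"])

print(monthly)  # {'2023-11': (1, 120.0), '2023-12': (1, 88.5)}
```

In Excel itself, the equivalent helper column could be built with text functions along the lines discussed (e.g., extracting the first eight characters with `LEFT` and assembling them into a real date with `DATE`), after which a pivot table can group transactions by month.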
UBM 2025 Review & 2026 Technology Dev Update
## Summary The meeting centered on a critical review of the operational challenges faced throughout 2025 and the strategic roadmap for 2026, with a core emphasis on shifting from reactive problem-solving to proactive, continuous improvement. ### Opening Philosophy: From Decision to Action The discussion was framed by a metaphor highlighting the difference between deciding to act and actually executing, setting the tone for the entire meeting. The key takeaway was that identifying problems is not enough; the team must be empowered and expected to bring forward potential solutions alongside the challenges to drive meaningful change and reduce daily "fire drills." ### Review of 2025: A Year of Reactive Challenges The past year was characterized by significant growth that exposed systemic weaknesses, forcing the team into a reactive mode to maintain operations. Several major pain points were successfully addressed, but often with band-aid solutions. - **AI Implementation for Bill Ingestion:** OpenAI was integrated to help extract information from bills and push them through the ingestion process, though this required untangling legacy data structures. - **Application and UI/UX Improvements:** Substantial work was done to adapt the application's build templates and user interface to handle variations across clients, vendors, and service types, ensuring data flowed correctly from upstream systems. - **Focus on Data Accuracy and Guardrails:** A major effort was initiated to design, implement, and test data validation guardrails, aiming to prevent the same errors from recurring and consuming development resources on repetitive hot fixes. - **Uncovered Systemic Complexities:** The reactive work revealed deep interdependencies between multiple subsystems (e.g., UDM, DSS, FPG), which often operate like separate platforms. This complexity, compounded by legacy, on-premises systems incongruent with cloud applications, created scalability limits and significant compliance overhead. ### Strategic Direction for 2026: Optimization and Empowerment The vision for the coming year is to build on the lessons of 2025 by optimizing foundational processes, thereby freeing up capacity to work on value-added features rather than just operational table stakes. - **Shift from Reactive to Proactive:** The goal is to move beyond firefighting by better planning for known challenges, documenting "the new normal" for workflows, and understanding that not every issue has a technological solution; some will require updated operational processes. - **Enhance Customer-Centricity Internally:** While clients only see the final output, internal teams must adopt the customer's perspective to streamline the end-to-end process, from timely bill download to payment. - **Centralize and Systematize Knowledge:** A critical initiative is to codify fragmented institutional knowledge into living, version-controlled documentation to eliminate ambiguity and accelerate onboarding and problem-solving. ### Key Initiatives for the 2026 Roadmap Specific projects were outlined to address the core challenges identified, focusing on data integrity, self-service, and streamlined client onboarding. - **Doubling Down on Data Accuracy:** The focus will be on strengthening guardrails using historical human-validated data as a foundation, rather than relying solely on AI interpretation, and creating better "interpreters" between systems like DSS and UDM to align data contexts. 
- **Self-Service Data & Reporting:** To stop the bottleneck of creating custom reports, a flexible API will be provided, allowing clients to access their data directly. This will be followed by a drag-and-drop report builder, empowering customers and freeing the development team from endless custom report requests. - **Streamlining and Formalizing Onboarding:** A major push will involve implementing tools like automated Gantt charts synced with CRM data to visualize timelines and dependencies. A formal "wizard" for AP file mapping will require clients to sign off on a specific data format early, with any subsequent changes triggering a formal change control process to manage expectations and scope creep. ### Process Evolution: Managing Requests and Priorities New mechanisms will be established to handle the influx of ideas and issues from both internal teams and clients in a transparent and prioritized manner. - **Structured Intake Channels:** All feedback, including bug reports, feature requests from Client Success Managers (CSMs), internal operational needs, and sales input, will be funneled into centralized systems (HubSpot for clients, dedicated forms/channels for internal teams) instead of disparate emails and chat messages. - **Transparent Prioritization Framework:** A clear process will be developed to assess, prioritize, and communicate the status of requests. This includes defining severity levels (e.g., P0 for critical bugs) and ensuring all teams understand how priorities are set based on impact to the platform, internal processes, and clients. - **Setting Firm Client Expectations:** The team will adopt an enterprise-standard approach where requirements agreed upon during onboarding are binding, and late changes are formally documented and assessed for impact on timeline and resources, protecting team bandwidth. ### Highlights from Q&A The open forum addressed specific operational concerns and communication plans. - **Standardization of Time Zones:** The lack of a standard time zone reference across international systems and vendor cutoffs (like UTC-based payment files) was acknowledged as a significant pain point. The solution will involve clarifying which systems use which standards and investigating the possibility of more flexible, on-demand data refreshes where feasible. - **Internal and Customer-Facing Communications:** A regular release schedule is planned, complemented by short internal training videos (e.g., via Loom) for new features. These resources can then be adapted by CSMs and sales for client-facing discussions, improving stickiness and demonstrating ongoing development.
UBM Planning
## Summary The meeting centered on addressing a critical technical issue involving duplicate data imports within a system referred to as UBM, which is consuming excessive operational resources and affecting a specific client. ### The Core Technical Issue A significant technical malfunction was identified where a data import process created duplicate entries from a single source file. The discussion focused on the symptoms and immediate observations of this failure. - **Duplicate imports of a single file**: The system erroneously processed the same received file multiple times, leading to redundant data entries. This was confirmed as the primary symptom, narrowing the investigation away from the possibility of multiple source files being sent. - **Abnormally long processing time**: One of the file processing jobs was reported to have taken one hour to complete, which was flagged as an excessively long duration and a potential indicator of an underlying system performance or logic error. ### Operational Impact and Urgency The issue was framed as a high-priority problem due to its direct negative impact on daily business operations and client relations. - **Diversion of critical resources**: The problem is consuming a substantial amount of time for operational staff, who need to focus their efforts on core activities like processing and paying bills to maintain workflow momentum. - **Exacerbation with a key client**: The problem has manifested specifically for a client named Victor, who is already identified as a "problem client." Ensuring system reliability for this client is a particular concern to prevent further complications. ### Investigation Status and Next Steps The conversation concluded with an acknowledgment that the root cause was not yet determined and a plan for collaborative resolution. - **Undetermined root cause**: The team explicitly stated that the precise source of the bug or system failure had not been identified during the meeting, confirming the issue was still in the diagnostic phase. - **Commitment to a team-based solution**: It was agreed that solving this problem requires a coordinated team effort to delve deeper into the system's behavior. A follow-up plan was suggested for a team member to provide a recorded audio summary of any conclusions or findings after further investigation.
Daily Priorities
## Customer The customer operates in a data-intensive operational environment where managing and reconciling billing and account information from multiple systems is a core function. Their role involves ensuring the accuracy and consistency of customer account data across various platforms, including UBM for lab reports and a centralized Data Services repository. The background indicates a significant reliance on manual reporting processes, cross-referencing data from disparate sources like "late alive bills" and billing accounts reports to maintain a coherent view of customer statuses, particularly for critical operational reviews and customer success management. ## Success The most significant achievement has been the establishment and adoption of a single, authoritative "source of truth" for all account data. This centralization, primarily within the Data Services master accounts file, has successfully replaced a chaotic landscape of numerous conflicting reports. This consolidated source is now being leveraged for multiple critical use cases, fundamentally shifting the workflow from one of constant reconciliation and doubt to a more structured and reliable starting point for all downstream analysis and reporting. This move has provided a much-needed foundation for consistency and has begun to reduce the time spent arbitrating between different data sources. ## Challenge The primary ongoing challenge is the persistent issue of data fragmentation and duplication, especially concerning account records. A significant pain point is the misalignment between systems; for instance, accounts marked as "closed" in one platform may continue to generate invoices or appear in active reports elsewhere. This necessitates tedious manual cleanup efforts that are often reactive rather than proactive. Furthermore, the process of generating necessary reports, such as weekly summaries for customer meetings, remains largely manual. This creates a bottleneck, consumes considerable time, and introduces the risk of errors as individuals must manually "slice and dice" data from the central source to meet specific, evolving requirements. ## Goals - To fully automate the generation of key reports, particularly the weekly summary for customer reviews, by codifying the existing manual steps into a reliable, scheduled process. - To achieve a complete and reliable merge between the lab report data and the central billing account data, eliminating the need for manual comparison and duplicate hunting. - To enhance data quality and proactive error detection by enabling all system validations for a targeted set of key customers, which will help catch billing inaccuracies like usage spikes or incorrect decimal placements. - To implement a sustainable process for maintaining data hygiene, ensuring that once accounts are cleaned up or closed, this status is consistently reflected across all systems to prevent future discrepancies. - To evolve the centralized reporting into an actionable tool that not only provides data but also helps teams manage customer issues and track progress over time.
DSS Matching Logic
## Core Issue: Build Template System and Data Enrichment in DSS The central challenge discussed is the rigidity and drift within the Data Services System's (DSS) build template system, which is used to structure and enrich invoice data. The legacy system's dependency on rigid templates creates downstream problems in Utility Bill Management (UBM), particularly the automatic creation of new virtual accounts and templates when incoming invoice data doesn't perfectly match historical definitions. The meeting explores strategies to refine the template matching logic, enforce data integrity on critical fields, and execute a systematic cleanup of accumulated template data. ### The Problem with Current Build Templates The existing system for matching invoices to predefined build templates is overly rigid and prone to creating duplicate data, leading to operational inefficiencies. - **Rigid Matching Logic:** The current system requires exact matches on five key fields (like service address and rate code) to map an invoice to a historical template. Slight variations (e.g., "GS" vs. "General Services" for a rate code) cause a match failure, triggering the creation of a new, often redundant, template and virtual account in UBM. - **Template Proliferation:** This matching failure mechanism has likely led to the creation of a large number of "one-off" templates. This proliferation becomes unsupportable and creates significant manual cleanup work downstream, as operations teams must reconcile the mismatched data in UBM. - **Downstream Impact in UBM:** The core business impact is felt in UBM, where the creation of new virtual accounts due to template mismatches results in mapping errors and requires extensive manual intervention from operators to correct. ### Proposed Solution: Refined Template Matching & Enforcement The proposed strategy involves loosening the criteria for template *matching* while tightening control over the *enforcement* of critical data fields within a matched template. - **Simplify Matching Criteria:** The suggestion is to distill the matching logic down to three core, stable identifiers: **Client, Vendor, and Account Number**. This broader matching approach should capture more invoices correctly under the appropriate historical template, reducing the creation of new ones. - **Enforce Critical Template Fields:** Once a template is matched, certain fields within it must be enforced as immutable to ensure data consistency. The **observation/service type for usage and cost data** (e.g., "use cost") is considered non-negotiable, as changing it affects billing calculations. Other fields, like descriptive captions for line items (e.g., "State Tax"), can be more flexible and updated based on the invoice content. - **Utilize LLM Context More Effectively:** The Large Language Model (LLM) used in the Intelligent Processing System (IPS) could be better leveraged. It should be provided with a much narrower, more relevant set of historical data (filtered by client/vendor/account) to make its enrichment decisions, rather than a broad set of all client data. ### The Critical Need for Historical Data Cleanup Cleaning up the existing corpus of historical templates and line item definitions is identified as a prerequisite for improving future automated processing. - **Foundation for LLM Accuracy:** The historical data serves as the primary reference for the LLM during the data enrichment step. 
Inconsistent, duplicate, or "noisy" historical data directly leads to poor LLM predictions and perpetuates downstream errors. - **Targeted Cleanup Activities:** An audit is needed to identify and merge duplicate templates (e.g., multiple "propane" line items) and eliminate inactive or erroneous service item descriptions. Reducing the problem space for the LLM by cleaning the reference data will increase processing accuracy. - **Progressive Filtering Strategy:** The system currently uses a progressive fallback method for matching: it first tries to match on the most specific criteria (Commodity + Vendor + Account), then broadens to Vendor + Commodity if needed. The goal is to strengthen the initial, most specific match by ensuring the underlying data is clean. ### Technical Implementation and System Flow The discussion clarified the two-step processing flow within DSS and how the template system integrates. - **Step 1 - LLM Initial Classification:** The first step involves the LLM analyzing the raw OCR output from an invoice to extract key entities like client, vendor, account number, and commodity type. This step establishes the initial context without relying heavily on historical templates. - **Step 2 - Historical Data Enrichment:** The second step uses the identifiers from Step 1 to query for relevant historical templates and line items. The system then attempts to align the LLM's extracted line items with the historical patterns. The success of this step hinges on the synergy between the textual description of line items on the new invoice and how they were described in the historical data. - **Current Data Scope is Too Broad:** A key finding is that the system may be sending too much historical data to the LLM for context. Instead of sending all service items for a client, it should send only those relevant to the specific **client-vendor-account (and commodity)** combination identified in Step 1, making the LLM's task more precise. ### Next Steps and Strategic Considerations The meeting concluded with specific investigative and tactical actions to address the outlined problems. - **Quantify the Problem:** The first action is to analyze the database to determine how many new templates and line items have been created by the automated system over the past several months. This data will reveal the scale of the template proliferation issue. - **Implement Refined Data Filtering:** The technical team will adjust the "Step 2" enrichment API call to filter historical data based on the specific client-vendor-account combination, providing a more targeted context to the LLM and improving match accuracy. - **Plan a Systematic Cleanup:** A strategy for programmatically identifying and merging duplicate or erroneous templates and line item descriptions needs to be developed. This cleanup is essential before the system can safely begin learning from more recent data. - **Re-evaluate the Historical Data "Freeze":** The system currently only uses historical data created before a specific "go-live" cutoff date (e.g., June 30, 2025) to avoid learning from its own mistakes. Once the data is cleaned, a cautious, client-by-client approach to updating this historical data window can be considered to allow the system to adapt to legitimate new patterns.
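The progressive fallback described above can be summarized in a short sketch. This is an illustration under assumptions, not the DSS implementation: the record shapes and field names are invented, and the "route to review" behavior reflects the proposed direction rather than current behavior.

```python
from typing import Optional

# Most specific key first (commodity + vendor + account), then a broader
# vendor + commodity key, mirroring the fallback strategy described above.
FALLBACK_KEYS = [
    ("commodity", "vendor", "account_number"),
    ("vendor", "commodity"),
]

def find_template(templates: list, invoice: dict) -> Optional[dict]:
    for key_fields in FALLBACK_KEYS:
        matches = [
            t for t in templates
            if all(t.get(f) == invoice.get(f) for f in key_fields)
        ]
        if len(matches) == 1:
            return matches[0]  # exactly one confident match
    # Ambiguous or no match: queue for manual review instead of silently
    # creating a new template and virtual account.
    return None
```

Under the proposed simplification, the primary key would become client + vendor + account number, which is more tolerant of cosmetic variations in fields like rate code.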
UBM Q1 Roadmap
## Summary of the Meeting ### **Critical Client Issue: University Billing Data** The discussion opened with a review of a significant ongoing issue for a major university client. The core problem is that the client's utility usage data is not being correctly split and displayed in the platform, stemming from a legacy billing setup where multiple campus buildings are consolidated under a single bill. This has led to a year of frustration for the client, as promised data views have not materialized. The team acknowledged this is not a system bug but a fundamental gap in processing "summary bills." The immediate challenge involves manually correcting extensive historical data across many buildings, but the larger concern is ensuring a sustainable, accurate process moving forward. This case raised a red flag about the potential for other clients facing similar unmet expectations from sales promises. ### **Review of Recent Development & System Challenges** The conversation shifted to preparing for an internal review, focusing on development accomplishments and persistent systemic challenges. Key achievements from the recent period were highlighted, including resolving critical **DSS capacity issues**, implementing numerous hotfixes, and adding important UI improvements and data tracking features. However, a major portion of the discussion was dedicated to the deep-seated challenges within the technology stack. The systems were originally architected for a different business model (postpay) and now struggle with the demands of prepay operations, leading to tight SLAs and constant firefighting. Furthermore, the codebase has significant technical debt, with logic that is haphazard and doesn't scale. This results in a fragile foundation where solving one problem often uncovers several more, consuming immense developer time for troubleshooting rather than building new capabilities. ### **Vision and Roadmap for the Upcoming Year** The strategic focus for the upcoming year centers on operational stability and reducing friction, rather than flashy new features. The primary goal is to **reduce reliance on legacy systems and architecture** to alleviate chronic pain points and create a more scalable foundation. A key part of this effort is the planned consolidation and sunsetting of outdated, redundant reports. The roadmap is also heavily focused on optimizing core processes: streamlining account setup and data ingestion workflows, critically reviewing and adjusting validation logic and error handling, and significantly improving internal and external documentation. The ultimate aim is to reach a baseline where the platform reliably meets the fundamental expectations it was sold on, before expanding its feature set. ### **Operational Improvements: Intake Process & Build Templates** A significant operational update was the development of a new, transparent intake system for requests and issues. This system provides a public URL where any stakeholder can submit items, which are then automatically populated into a prioritization board for the development team. This initiative aims to create visibility and manage expectations by clearly communicating how requests are evaluated and scheduled, acknowledging that not everything can be addressed immediately due to capacity constraints and system complexities. Separately, a major technical deep-dive occurred on **build templates**, which are crucial for accurate data mapping. 
A new internal view was demonstrated, revealing issues like duplicate templates and mismatched units (e.g., pounds vs. gallons) that cause downstream errors. The future state envisions a centralized, version-controlled template system that enforces consistency, dramatically reducing setup errors and manual correction work. ### **Strategic Data Initiatives and Vendor Integration** Looking further ahead, the discussion touched on strategic data projects necessary for future scalability. One key initiative involves enhancing master data management by integrating third-party utility vendor information (e.g., from Arcadia). The goal is to create an enriched vendor database that includes metadata such as authentication requirements (MFA), data availability, and typical performance. This knowledge would then be embedded directly into setup workflows and customer-facing interfaces, proactively guiding implementations and setting accurate expectations. This represents a move from reactive problem-solving to proactive, data-driven process design, which is essential for handling a growing and diverse customer base efficiently.
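The duplicate-template and unit-mismatch findings lend themselves to a mechanical audit. A minimal sketch of such a check follows; the template records and field names are assumptions for illustration, not the internal view's actual schema.

```python
from collections import defaultdict

templates = [
    {"account": "A-100", "line_item": "Propane",  "unit": "gallons"},
    {"account": "A-100", "line_item": "Propane",  "unit": "pounds"},  # conflict
    {"account": "B-200", "line_item": "Electric", "unit": "kWh"},
]

# Group declared units by (account, line item) and flag any key that has
# more than one unit of measure, e.g. pounds vs. gallons.
units_by_key = defaultdict(set)
for t in templates:
    units_by_key[(t["account"], t["line_item"])].add(t["unit"])

for key, units in sorted(units_by_key.items()):
    if len(units) > 1:
        print(f"Unit mismatch for {key}: {sorted(units)}")
```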
Medline - Constellation Working Group Session 1 of 2 Funding & Payment Files
## **Summary** The meeting focused on resolving ongoing challenges in the payment reconciliation process between Medline and the Constellation team, facilitated by PayClearly. Key topics included establishing protocols for handling duplicate payments, clarifying the "mock bill" process, executing a detailed reconciliation of historical funding file discrepancies, and aligning on procedures for security deposits and new account onboarding. ### **Handling Duplicate Payments in Funding Files** A process was agreed upon to prevent funding duplicate payments, which had previously led to operational inefficiencies and reconciliation issues. - **Revised Process:** If a duplicate is identified in a funding file, the Constellation team will be notified, and a revised file will be requested instead of funding the duplicate items. This avoids the need to fund and later refund payments, streamlining the reconciliation. - **Communication Protocol:** Notifications for duplicate identification and cancellation will be sent to a specific distribution list, including the OPS email and key team members, to ensure clear and centralized communication. - **Turnaround Commitment:** The Constellation team committed to providing corrected files on the same day if issues are communicated in the morning, minimizing delays for other payments in the batch. ### **Clarification of the "Mock Bill" Process** The nature and purpose of the "mock bill" process were clarified, as it was identified as a primary source of duplicate payments in November. - **Process Definition:** A "mock bill" is not an estimate but an **interim payment of actual charges** captured from a utility portal or via phone when a detailed invoice is delayed. It is initiated within 10 days of a due date to avoid service disruption. - **Ideal Workflow:** When the actual, detailed invoice later arrives, it is processed in the system for analytics, and the interim record is archived. The system error in November occurred because this archival did not happen, causing both the interim and actual invoices to be paid. - **Strategic Use:** The process is intended to be a sparingly used exception for critical situations, not a standard rule, to maintain clean payment records. ### **Reconciliation of Historical Funding File Discrepancies** A detailed exercise is underway to reconcile outstanding differences in payment files, categorizing discrepancies to determine necessary funding adjustments. - **Categorizing Discrepancies:** Differences fall into three main categories: - **"Missed" Files:** Payments that were short-paid or deleted by Medline, often due to prior direct payments to the vendor or suspected duplicates. - **"Sample" Files:** Early test files mistakenly included in the reconciliation analysis that require no funding action. - **Large Short Payments:** Instances where a significant portion of a file was not funded, requiring line-by-line validation. - **Three-Step Validation:** For each discrepancy, the team will: 1) confirm if Medline paid directly, 2) verify if PayClearly also paid, and 3) check with the utility to see how the payment was applied to the account. - **Next Steps:** The Constellation team will complete a line-item review of all notes to produce a "true-up" analysis, identifying exactly which payments require funding or correcting entries to bring accounts to a zero balance. 
### **Addressing Security Deposits and Account Mapping** The reconciliation exercise uncovered a need for standardized procedures for handling security deposits and correcting facility account mappings. - **Security Deposit Process:** A dedicated balance sheet account is used for security deposits paid to utilities and refunds received. - The team will provide specific general ledger account instructions for applying both deposits and refunds. - For new or large deposit requests (e.g., over $100,000), the Constellation team's assistance will be leveraged to potentially negotiate waivers. - **Account Alignment:** Instances were found where utility accounts were mapped to the incorrect internal facility. Instructions for correcting these mappings or adding new accounts should be sent to the designated operations contacts. ### **Onboarding New Accounts from an Acquisition** The process for onboarding utility accounts from a recent acquisition (Microtech) was discussed to ensure no service disruptions. - **Urgency:** With payables transitioning imminently and billing cycles approaching, there is a risk of disruption if accounts are not onboarded promptly. - **Action Plan:** Medline will forward the account details (approximately nine facilities) for quick onboarding. If necessary, Medline will make interim payments directly to utilities while the onboarding is completed, with both teams committing to clear communication on the process status.
Daily Progress Meeting
## Summary The meeting focused on progress updates across several development and reporting workstreams, followed by the assignment of a new task aimed at automating a manual process. ### Progress on Automated Reporting and Data Management The discussion centered on the management and communication of automated data refreshes for critical reports. - **Tracking and Inactive Account Reporting:** A tracking system for Data Services (DSS) has been established. An "inactive report" was created and shared to help filter out inactive vendor and client accounts from the master dataset. - **Setting Data Refresh Expectations:** A key point addressed was the refresh schedule for the master data used in reports. It is not real-time but refreshes on a scheduled basis (up to eight times daily) due to the large volume of data. The team agreed to clearly communicate this to stakeholders so they can manually refresh reports if they need the very latest information, managing expectations about data latency. ### Build Template User Interface Development Significant progress was shown on a new user interface (UI) for building templates, which allows users to search and view detailed account information. - **UI Functionality and Data Retrieval:** The developed UI enables users to select a client, which then dynamically populates the associated vendor and billing account numbers. Upon searching, the system retrieves comprehensive account data including service type, addresses, and related meter line items from the backend database. - **Feedback and Next Steps:** The interface was demonstrated and received positive feedback for being helpful. Immediate actions include pushing the current build to a testing environment and implementing minor labeling changes, such as renaming "billing account" to "account number" to avoid confusion with other terms in the system. ### Updates on Power BI and User Interface Enhancements Updates were provided on the completion of a Power BI report and enhancements to another application's filtering interface. - **Report Deployment:** A Power BI report was successfully updated and deployed, with the related ticket marked as complete. - **Filtering Improvements for Vendor Data:** For a separate task, a button was added to a UI to filter test vendor data, along with a "clear all filters" button. This functionality has been deployed to a development server and is awaiting testing and deployment to a preview environment, pending confirmation of the correct database connection string. ### Deployment Processes and New Task Assignment The conversation clarified deployment responsibilities and introduced a new automation project. - **Clarification on Deployment Authority:** It was confirmed that deployments to the FDG Connect environment follow a standard pipeline process and can be executed by any team member, not just a specific individual. - **Introduction of Late Bills Automation Task:** A new task was assigned to automate a manual process described in an email. The goal is to analyze and potentially automate the matching of data between DSS and another system (UVM), specifically focusing on late bills. The team member is to investigate what can be automated easily and determine if parts can be delegated, with a progress check scheduled for midday. ### Technical Clarifications and System Logic A technical question was raised regarding the underlying business logic governing the system's data relationships. 
- **Question on Service Type and Line Item Association:** A developer sought clarity on whether the logic defining which service types are linked to specific possible line items is stored in the database or defined within the application/API layer. The preliminary response suggested this logic is likely condition-based within the application rather than explicitly defined in the database schema, with an offer to share more details from the relevant web app and API.
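As a purely hypothetical illustration of the preliminary answer (condition-based logic living in the application layer rather than the database schema), the association might look something like the following; every name here is invented.

```python
# Hypothetical app-layer rules: which line items a service type may carry.
BASE_LINE_ITEMS = {
    "electric": {"usage", "demand", "state_tax"},
    "gas":      {"usage", "delivery", "state_tax"},
}

def allowed_line_items(service_type: str, has_supply: bool = False) -> set:
    # Conditions like these would live in the API/application code, which is
    # why the association is not visible anywhere in the database schema.
    items = set(BASE_LINE_ITEMS.get(service_type, ()))
    if service_type == "electric" and has_supply:
        items.add("generation")
    return items

print(allowed_line_items("electric", has_supply=True))
```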
Bill Templates Review
## Customer The customer is responsible for energy management for a large campus comprising approximately 30 buildings. Their primary focus is on monitoring and controlling energy consumption across these facilities, as the cost per unit of energy is typically fixed through separate rate negotiations with suppliers. The role involves detailed tracking of utility usage at the individual building level to identify trends, implement efficiency measures, and manage operational budgets effectively. ## Success While the anticipated analytical capabilities of the platform have not yet been realized, the primary success lies in the thorough and detailed onboarding process. The customer has invested significant effort in providing comprehensive data, including mapping specific meter numbers to corresponding buildings and defining attributes for logical groupings (such as dormitories, academic buildings, and residential houses). This foundational work has created a clear and actionable roadmap for configuring the system once the core technical challenge is resolved. ## Challenge The most significant and persistent challenge is the platform's current inability to correctly process and display data from master utility accounts. For roughly 85-90% of the campus buildings, electricity is billed under a single master account from the supplier. Although the detailed invoice breaks down usage by individual meter numbers tied to specific buildings, the platform consolidates all this data under one umbrella account and location. This renders the system unusable for its core purpose: enabling month-to-month and year-over-year consumption comparisons for individual buildings. This fundamental disconnect has led to a year of frustration with no visible progress, causing the customer to disengage from using the platform entirely. Additional complications include occasional missed invoice capture without clear notification and the need for the solution to accommodate future changes in energy suppliers. ## Goals The customer's key objectives for using the platform are clear and directly tied to operational efficiency: - To analyze energy **usage** (not just cost) at the granular, building-by-building level. - To track and compare consumption metrics month-over-month and year-over-year for each specific building. - To utilize predefined building attributes to view aggregated data for logical groups, such as all dormitories or all academic buildings. - To have a reliable system that continues to function correctly even when the organization switches energy suppliers, ensuring consistent data visibility. - To eventually have historical data corrected and properly allocated to the correct building locations for accurate long-term trend analysis.
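The allocation this customer needs is straightforward once the platform honors the invoice's meter-level detail. A minimal sketch follows, with invented meter-to-building mappings and line items standing in for the customer's real data.

```python
from collections import defaultdict

# The customer's onboarding work already provides this mapping.
meter_to_building = {"M-001": "Dormitory A", "M-002": "Science Hall"}

# Meter-level lines from a single master-account invoice.
bill_lines = [
    {"meter": "M-001", "usage_kwh": 12_500},
    {"meter": "M-002", "usage_kwh": 9_800},
]

# Roll usage up to individual buildings instead of one umbrella location.
usage_by_building = defaultdict(float)
for line in bill_lines:
    building = meter_to_building.get(line["meter"], "UNMAPPED")
    usage_by_building[building] += line["usage_kwh"]

print(dict(usage_by_building))
# {'Dormitory A': 12500.0, 'Science Hall': 9800.0}
```

Group-level views (e.g., all dormitories) would then be a second rollup over the building attributes the customer has already defined.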
Due Dates Report
## Customer The customer is a large organization operating in the healthcare sector, managing a high volume of vendor payments. They have recently gone live with the payment automation platform and are actively using it to process hundreds of payments, with a strong preference for electronic payment methods (virtual card, E-check) over paper checks. Their internal teams are deeply involved in daily operations, pulling data directly from the platform to share with their own clients, indicating a need for accurate, real-time data synchronization. ## Success The most significant success has been the effective **automation of payment processing**, particularly for repeat vendors. The platform's system for credentialing and routing vendors for the first time, though it requires an initial investment of time, successfully sets up subsequent payments for faster, automated processing. This foundational work is creating efficiency for future payment cycles. Furthermore, the implementation of rules to prioritize payments based on due dates, together with the ongoing development of a system to push reissued payments with past due dates to the front of the queue, demonstrates a collaborative effort to refine the service around the customer's core need: ensuring vendors are paid on time. ## Challenge The primary challenge centers on **data consistency and clarity in payment status reporting**. A critical discrepancy has been identified between the "cleared date" for payments returned via the API and the date displayed within the platform's user interface (PayClearly). This is causing confusion for the customer's team and their end-clients, as they see different information in different places. Initial analysis points to a potential timezone conversion issue between UTC (used in the API) and local time displays in the app, but it requires specific payment examples to diagnose fully. A secondary, related challenge involves the process and timeline for **reissuing failed or canceled payments**. When a payment method fails and needs to be reissued, it triggers a re-enrollment and verification process that is currently treated as a new setup task, taking up to five days. This timeline conflicts with the customer's expectation for a swift resolution, especially when the original payment is already past due. The current system does not systematically prioritize these reissue workflows over other pending payments, leading to potential delays in sending out the corrected payment electronically. ## Goals The customer's key goals for using the platform are: - To achieve complete, reliable, and transparent automation of their accounts payable workflow. - To maximize the rate of electronic payments (virtual card, E-check) and minimize the issuance of paper checks. - To have clear, consistent, and accurate data across all touchpoints (API, platform reports, and user interface) to support internal and external reporting. - To ensure vendor payments are consistently made by their due date, improving vendor satisfaction and operational reliability. - To scale payment volume confidently, with the assurance that platform capacity and processes can handle significant increases without degradation in service level or timeliness.
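The suspected timezone effect is easy to demonstrate. In the sketch below, a single UTC "cleared" timestamp lands on different calendar dates depending on the rendering timezone; the timestamp and the America/Chicago zone are arbitrary examples, not values from the customer's data.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A payment cleared shortly after midnight UTC...
cleared_utc = datetime(2025, 3, 1, 2, 15, tzinfo=ZoneInfo("UTC"))

api_date = cleared_utc.date()                  # what a UTC-based API reports
local = cleared_utc.astimezone(ZoneInfo("America/Chicago"))
ui_date = local.date()                         # what a local-time UI displays

print(api_date, ui_date)  # 2025-03-01 2025-02-28 -- two different dates
```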
Reports / Data Needed for Success in 2026
## **Customer** The customer in focus is a large enterprise client, likely within the healthcare or multi-site facility management sector, utilizing a comprehensive utility billing and data management platform. Their role involves overseeing complex portfolios with numerous locations and utility accounts, requiring accurate data aggregation, reporting, and financial reconciliation for energy spend and sustainability tracking. Their background indicates they rely on the platform not just for bill pay but for critical operational reporting, budgetary analysis, and meeting regulatory or internal ESG (Environmental, Social, and Governance) disclosure requirements. ## **Success** The most significant success achieved was the **identification and manual resolution of pervasive data quality issues** affecting critical consumption and cost reporting. For the customer's major portfolios (e.g., Hexpoal, St. Elizabeth, Munson), the root causes of reporting inaccuracies, such as double-counted usage lines, incorrect unit measures, and misclassified bills, were systematically diagnosed. A dedicated cleanup effort corrected historical data for the 2025 period, restoring confidence in the accuracy of location-level utility consumption and spend figures. This foundational work transformed a situation of unreliable data into a cleansed, actionable dataset, which is essential for their energy management and financial forecasting. ## **Challenge** The primary challenge revolved around **systemic data integrity and process reliability issues** that undermined trust in the platform's core functionality. Key problems included: - **Data Inconsistencies:** Persistent errors like double-counted usage, incorrect service dates for delivered commodities (e.g., propane), and bills being mis-categorized (e.g., full-service vs. supply-only) led to inflated or inaccurate consumption and cost reports. - **Reconciliation Complexity:** The bill payment and funding reconciliation process, especially involving emergency payments, was described as a "spaghetti" of interconnected files and dates. The lack of real-time data synchronization with the payment processor made manual, line-by-line reconciliation a daily, time-consuming burden for the customer's operations team. - **Unreliable Monitoring Tools:** Key customer-facing health checks, like the "Bill Health Report," were not considered a reliable source of truth due to issues with virtual account mapping and data lags, leaving the customer without a clear, automated way to identify missing bills or data problems. - **Communication Gaps:** There was a noted lack of proactive communication regarding system status updates (e.g., processing delays, feature outages) and missing data requirements (e.g., expired portal credentials), forcing the customer into a reactive "firefighting" mode instead of strategic management. ## **Goals** The customer's goals, as inferred from the discussions, are centered on achieving stability, automation, and self-sufficiency: 1. **Accurate and Automated Data Validation:** To have automated checks and validation rules within the platform that proactively flag data anomalies (e.g., usage spikes, missing bills) before they impact reports, eliminating the need for manual forensic analysis (a brief sketch of such a check follows this summary). 2. **Transparent and Proactive Communication:** To receive clear, timely notifications for any system issues, processing delays, or upstream data failures (like missing bills) through dedicated channels or in-platform alerts. 3. 
**Streamlined Financial Reconciliation:** To have a redesigned, intuitive payments interface that clearly separates normal and emergency payments, supported by better APIs from the payment processor to enable automated, accurate matching of funding requests and customer deposits. 4. **Self-Service Reporting Capabilities:** To move away from custom, hard-coded report builds and toward a future state where they can access all their data via a robust API or standardized data export (like a "Sheets" view), allowing their team to build the specific reports they need without ongoing development requests. 5. **System Stability and Feature Completion:** To see promised features (like multi-select vendor filters) delivered on committed dates and to have core platform stability so that their team can rely on it for daily operations without unexpected disruptions.
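Below is a minimal sketch of the kind of automated spike check envisioned in goal 1, with an invented threshold (three standard deviations) and invented usage history; it is an illustration of the concept, not the platform's validation logic.

```python
from statistics import mean, stdev

history = [10_200, 9_800, 10_500, 9_900, 10_100]  # prior monthly usage
latest = 31_000                                   # newest reading

baseline, spread = mean(history), stdev(history)
# Flag the reading for review if it deviates far from recent history,
# instead of letting it flow silently into reports.
if spread and abs(latest - baseline) > 3 * spread:
    print(f"Usage spike flagged: {latest} vs. baseline {baseline:.0f}")
```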
Training
## Meeting Logistics and Scheduling Adjustments The primary focus was on coordinating meeting schedules and addressing an administrative oversight. - **Rescheduling a key meeting:** The meeting with PayClearly was rescheduled for the afternoon. While a preferred time was requested, there was initial confusion because the invitation was not received by one participant, indicating a potential communication gap in the calendar system that required a manual forward of the invite. - **Upcoming holiday coverage:** There was a brief acknowledgment of the upcoming Christmas holidays, with coverage plans being discussed to ensure support for an offshore team remains uninterrupted during the period. ## Operational and Administrative Follow-ups The discussion shifted to immediate email correspondence and a procedural compliance issue. - **Urgent email responses:** An immediate action was required to respond to an email from a colleague named George, highlighting a time-sensitive communication that needed attention. - **Training compliance concern:** A significant operational issue was raised regarding an individual who had not completed mandatory training. This was described as an ongoing problem ("Just doesn't end"), suggesting a recurring challenge with ensuring team-wide adherence to required protocols.
Bill Templates Review
## **Summary** The meeting consisted of two primary segments: an initial internal team sync focused on ongoing technical development tasks, followed by a strategic discussion in preparation for a crucial client call aimed at securing a major year-end deal. ### **Internal Development Updates** This section covered the status of various engineering and data-related projects currently in progress. - **Build Template System and Tracking:** Work is underway to add tracking for template creation, though a decision was made to continue allowing creations for now. The next major step is to develop a feature allowing operators to assign specific build templates to bills, which will be essential for data enrichment. A foundational UI page has been created to display structured JSON build template data, which will serve as a base for future features, including filtered views by vendor, client, and client account. - The immediate task is to finalize and deploy the tracking mechanism for template creation. - A new, dedicated page is planned to allow filtering of build template data by vendor, client, and client account number, pulling information directly from the SQL database to show all related service accounts, meters, and line items. Performance optimization for database queries is a noted concern. - The existing UI for viewing per-bill template data is considered valuable for debugging and will be retained, with potential future integration into individual bill details. - **Power BI Reporting and Vendor Management:** Updates were provided on the deployment of a Power BI report and work on a vendor management feature. - A Power BI report named "System Status Bills" has been redeployed to the production service, with minor changes pending. - For ticket PS132, a new "BD Enrolled" field has been successfully added to the edit vendor form. The requirement for this setting to be "filterable in various areas" was clarified to mean adding a filter (similar to the existing Active/Inactive filter) on the vendor management page, which has been implemented. Extending this filter to other reports is a potential future step. - **Report Request for Closed Accounts:** A new report was requested to identify client-vendor accounts that have been marked as final or inactive in the legacy data services (DS) system. This is needed to avoid spending processing time on closed accounts, especially with the upcoming closure of a specific client's accounts. - This will likely be created as a separate report to maintain performance, rather than integrating it into an existing one. - The team will verify if the necessary "inactive" timestamp data is already being stored. ### **Preparation for Critical Client Call** The discussion shifted to preparing for an imminent call with a key client (Yori), where the success of a significant pending deal is at stake. - **Presentation Strategy and Objectives:** The goal of the call is to instill confidence and facilitate the signing of a major deal before year-end. The presentation will cover recent work for three specific clients: Hexpo, St. Elizabeth, and Munson. - The update will be presented directly by the project lead, who is closest to the technical details. - The narrative will focus on issues identified, short-term fixes applied, and the long-term solutions being implemented. - **Client-Specific Data Correction Status:** Progress on correcting data issues for the clients in question was reviewed. - **Hexpo:** A comprehensive overhaul of all accounts has been completed. - **St. 
Elizabeth:** Data corrections are nearly complete, pending resolution for one account with missing historical data. - **Munson:** A review based on previously identified issues will be finished imminently. - The next planned step is to perform the same comprehensive "sweep" exercise done for Hexpo on St. Elizabeth and Munson accounts, though this is scheduled for after the holidays. - **Communication on System Validations:** A key point of discussion was how to communicate the state of system data validations. - It was confirmed that while some validations had been relaxed in the past to process bills faster, this had not been explicitly communicated to the client. - The strategy for the call is to state that new validations are being *added* to improve data quality, which aligns with the client's stated preference for correctness over speed. This is seen as a way to build confidence without delving into past changes. - **Outstanding Client Requests and Quick Wins:** Several other client requests were highlighted as potential topics or opportunities. - **Bill Health Report:** An investigation is needed into why certain accounts missing usage data are incorrectly showing a "green" healthy status instead of "orange." This is expected to be a point of discussion. - **Multi-Vendor Selection:** A long-standing request for a toggle or multi-select dropdown to assign multiple vendors or commodities to a single bill was discussed. While not a current priority, if it is a relatively simple fix, implementing it could serve as a valuable "quick win" to demonstrate responsiveness. - **Future Onboarding Process:** There was consensus on the need to review and improve the onboarding process for new customers to prevent recurring data issues, emphasizing better upfront organization and setting realistic expectations about data manipulation within the system.
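The planned filtered build-template view described in the development updates above reduces to one parameterized lookup keyed on vendor, client, and account number. A minimal sketch follows, using sqlite3 as a stand-in for the actual SQL database; the table and column names are assumptions, and indexing the three filter columns is one conventional way to address the noted query-performance concern.

```python
import sqlite3

def fetch_template_rows(conn: sqlite3.Connection, vendor_id: int,
                        client_id: int, account_number: str) -> list:
    # One parameterized query returning the related service accounts,
    # meters, and line items for the selected combination.
    return conn.execute(
        """
        SELECT service_account, meter, line_item
        FROM build_templates
        WHERE vendor_id = ? AND client_id = ? AND account_number = ?
        """,
        (vendor_id, client_id, account_number),
    ).fetchall()
```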
Bill Template Review
## **Summary** The meeting focused on a critical examination of the data processing pipeline, specifically regarding how bills are matched to existing "build templates" and client accounts. A significant concern was raised about the potential systemic creation of duplicate accounts due to mismatches in the current logic, which could lead to data integrity issues and the propagation of incorrect bill data. The discussion revolved around understanding the current matching algorithm, identifying existing problems, and outlining steps for analysis and remediation. ### **Core Problem: Potential for Duplicate Accounts and Unused Build Templates** The primary issue identified is that the system may be failing to correctly match incoming bills to existing virtual accounts and their associated build templates. This failure could cause the system to create new accounts unnecessarily. - **The Risk of Skipping Build Templates**: When a bill doesn't match an existing account, it bypasses the crucial "build template." This template contains essential safeguards and correct metadata (like unit of measure and observation type). Without it, the system infers this data, which can be incorrect and then propagate to all future bills for that erroneously created account. - **Consequence: Data Duplication and Corruption**: The unchecked creation of new accounts likely means there are numerous duplicate accounts in the database that need to be identified and merged. This undermines data consistency and reporting accuracy. ### **Deep Dive into the Current Account Matching Logic** The conversation drilled into the technical specifics of how the system currently attempts to match an incoming bill's account number to a record in the database. - **Matching Key: Vendor ID and Account Number**: The fundamental matching logic uses the combination of the vendor ID and the account number from the bill to find a corresponding client account. - **Discovery of Data Normalization**: It was revealed that the system already stores a normalized version of the account number (in an `account_number_raw` column), where spaces and dashes are removed. However, the initial understanding was that the matching logic used the original, non-normalized `account_number` field. - **Clarification on Matching Process**: A key clarification was made: the system currently matches based on the original `account_number` column, not the cleaned `account_number_raw`. This means variations in formatting (like added dashes or spaces) would prevent a match, directly contributing to the duplicate account problem. ### **Analysis and Investigation Steps** To understand the full scope of the problem, several immediate investigative actions were proposed. - **Data Export for Analysis**: A request was made to export the full list of client accounts (approximately 27,000 records, excluding inactive ones) for external analysis. The goal is to manually or programmatically search for potential duplicates. - **Audit for DSS-Created Accounts**: A critical task is to determine if the system currently flags which accounts were created by the DSS pipeline itself. If such a flag doesn't exist, it becomes difficult to audit the scale of the issue. - **Implementing Fuzzy Matching**: To identify "near duplicate" accounts that differ slightly due to typos or formatting, the application of fuzzy logic matching techniques on the account number data was suggested as a necessary next step. 
### **Proposed Solutions and System Changes** The discussion concluded with actionable proposals to stop the problem from worsening and to fix the existing data. - **Halt Automatic Account Creation**: The most urgent change proposed is to stop the system from automatically creating new accounts when a match isn't found. Instead, such cases should be routed to a manual review process by an operator to prevent further database clutter. - **Implement Creation Tracking**: A system must be put in place to track and flag every new account created by the DSS process. This is a prerequisite for any cleanup operation. - **Review and Revise Matching Logic**: There is a need to re-evaluate the matching logic. The use of the normalized `account_number_raw` field for matching was debated as a way to be more tolerant of formatting differences and reduce false mismatches. - **Future Feature: Template Association View**: A longer-term idea was floated to create a feature allowing users to view all historical bills that have been successfully matched to and processed using a specific build template, enhancing transparency.
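To ground the normalization and fuzzy-matching ideas, here is a minimal sketch: account numbers are normalized the way the `account_number_raw` treatment is described (spaces and dashes removed), then near-duplicates are scored within a single vendor. The similarity threshold and sample values are assumptions.

```python
from difflib import SequenceMatcher

def normalize(account_number: str) -> str:
    # Mirror the described account_number_raw treatment.
    return account_number.replace(" ", "").replace("-", "").upper()

def near_duplicates(accounts, threshold=0.85):
    """accounts: list of (vendor_id, account_number) pairs."""
    hits = []
    for i, (v1, a1) in enumerate(accounts):
        for v2, a2 in accounts[i + 1:]:
            if v1 != v2:
                continue  # matching is always scoped to a single vendor
            n1, n2 = normalize(a1), normalize(a2)
            if n1 == n2 or SequenceMatcher(None, n1, n2).ratio() >= threshold:
                hits.append((a1, a2))
    return hits

# The first pair differs only in formatting; the third differs by one digit.
print(near_duplicates([(7, "12-345 678"), (7, "12345678"), (7, "12345679")]))
```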
Faisal <> Sunny
## **Customer** The customer is a company leveraging the platform for utility bill management and energy data analytics, specifically for multiple commercial clients in the energy sector. Their role involves overseeing the accuracy and completeness of utility bill data (including electricity, propane, and other commodities) for reporting and operational decisions. The background indicates they are a sophisticated user dealing with complex data ingestion, validation, and reporting workflows, with a focus on ensuring data integrity across historical records and future bill processing. ## **Success** The most significant success has been the proactive identification and systematic remediation of critical data quality issues that were impacting reporting accuracy. A comprehensive review of historical data for key clients has been completed, addressing major problems such as double-counted usage and missing bills. This effort has resolved a substantial portion of the inaccuracies, bringing historical data to a state estimated to be 80-90% reliable for consumption reporting. Furthermore, the process has successfully pinpointed root causes, such as human entry errors and account setup problems, enabling the team to move beyond firefighting and toward implementing preventative, long-term solutions. ## **Challenge** The primary ongoing challenge is related to systematic data ingestion and setup errors, particularly for non-metered commodities like propane deliveries. A core issue is the platform's inability to correctly handle "service dates" for delivered commodities (e.g., propane cylinders), leading to misallocated usage across monthly reporting periods. This stems from the underlying data model assuming all usage is metered. Additionally, ensuring complete bill capture requires correct credential setup for each utility, an operational process that has gaps. The unreliability of automated "bill health" reports creates a lack of trust, as users cannot distinguish between real problems and false positives/negatives. Finally, the reliance on a legacy data system (DDS) that is ill-suited for current workflows complicates the implementation of robust, validated data templates. ## **Goals** The customer's goals, as reflected in the discussion, are multi-faceted: - **Achieve 100% Data Accuracy:** The ultimate aim is to have completely reliable historical and ongoing data for all clients, eliminating errors in consumption reporting and bill-level details. - **Implement Preventative Safeguards:** A key goal is to operationalize "bill templates" to enforce consistent data structure during ingestion, thereby preventing setup errors, double-counting, and mapping issues at the source. - **Establish Transparent Processes:** They seek clear, trustworthy validation and monitoring processes, such as a reliable bill health report, to independently verify data quality without constant manual intervention. - **Transition to Robust Systems:** There is a goal to migrate critical validation logic away from the problematic legacy system (DDS) to a more suitable environment (e.g., DSS or UBM) to support sustainable, accurate data pipelines. - **Ensure Continuous Improvement:** The intent is to move from reactive fixes to a proactive model where insights from data issues directly inform product and process improvements to prevent recurrence.
UBM Q1 Planning (Session #2)
## Summary The meeting centered on addressing critical data quality and operational challenges within the platform, primarily stemming from gaps in the customer onboarding and bill setup processes. The discussion evolved into planning for system improvements and prioritizing a roadmap for the upcoming year. ### Onboarding Process and Data Source Gaps A core issue identified was the disconnect between customer expectations set during onboarding and the data that ultimately appears in the UBM platform. Currently, onboarding definitions (e.g., locations, vendors, utility types) are managed by CSMs and Data Services (DS) but are not systematically enforced or visible within UBM. This creates a "black box" where discrepancies between what a customer agreed to and what they see are difficult to diagnose. - **Lack of Systematic Validation:** The UBM team ingests bills from Data Services (DS) and Data Services System (DSS) assuming the data is clean and correctly attributed. There is no mechanism to validate incoming bills against the original onboarding template or agreement. - **Difficulty in Root Cause Analysis:** When data issues arise (e.g., double-counting, missing bills), it is extremely challenging to determine if the error originated with the customer's provided information, the CSM/DS handoff, DSS processing, or UBM ingestion. This leads to lengthy manual investigations. - **Proposed Solution Direction:** The conversation explored creating a "source of truth" for onboarding agreements, potentially within UBM's UI or linked from HubSpot. This would memorialize the expected data structure (location, vendor, utility type, service type) so any deviation in incoming bills could be automatically flagged for review. ### Bill Setup, Validation, and Error Management The team delved into the technical intricacies of bill validation, particularly concerning observation types (e.g., generation, demand) and how they are assigned based on utility type and service (distribution vs. supply). - **Historical Data Complications:** A specific pain point is translating historical bills that contain only a single usage line into the platform's distinct distribution and supply virtual accounts. A default rule (assigning to distribution) has been used, but this can cause reporting discrepancies. - **Reinventing Business Logic:** A concern was raised about the risk of duplicating the complex business logic that already exists in DSS and UBM for determining valid observation types. Any new system for defining expectations would need to carefully leverage or mirror this existing logic to avoid conflicts. - **Error Code Dilemma:** The team debated reactivating data validation error codes for postpaid customers. While this would prevent incorrect bills from being displayed, it would also delay bill visibility. The consensus was to aim for reactivation in Q1 or Q2, but only after implementing stronger guardrails, like the proposed build template enforcement. ### Roadmap Prioritization and Q1 Planning The discussion shifted to prioritizing feature development and operational work for the coming quarters, balancing firefighting with strategic platform improvements. - **Immediate Operational Priority (Item 0):** The highest priority is resolving the foundational data quality and onboarding definition issues discussed, as these are the source of ongoing "weekly fires" with customers. 
- **Customer-Facing Features:** Items like **Contract Management (Item 3)** are becoming a customer expectation and were prioritized over exploratory work like **Interval Data visualization (Item 2)**. However, any document management feature (contracts, onboarding sheets) should be considered holistically from an architectural standpoint. - **Platform Enhancements:** **Dashboard improvements for Bill Pay customers (Item 8)** and moving the **reporting engine off Google Sheets to a proper Azure API (Item 5)** were confirmed as important near-term goals. - **Q1 Realism:** It was acknowledged that Q1 will be heavily focused on operational issues and ongoing onboardings (e.g., Simon). Realistic feature development for the new roadmap items is expected to begin in earnest in February, March, and April. ### Process and Communication Improvements A key takeaway was the need to improve internal processes for managing requests and issues to reduce chaos and increase efficiency. - **Enforcing a Ticketing System:** A major goal for the next year is to mandate the use of a formal ticketing system for all bug reports and feature requests. This is intended to stop the flow of ambiguous requests via chat and email, which require excessive context-gathering from multiple teams. - **Upcoming Leadership Review:** The team prepared for an upcoming review meeting with leadership, aiming to communicate progress on 2025's challenges (DSS integration, validation issues) and set clear expectations for 2026. The message will include the new, stricter process for submitting and tracking work.
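Returning to the proposed onboarding "source of truth": a minimal sketch of how incoming bills could be checked against a memorialized agreement, assuming a simple keyed lookup by location and vendor (all field names here are hypothetical):

```python
# Hypothetical onboarding "source of truth": what was agreed per account.
ONBOARDING = {
    ("loc-17", "vendor-acme"): {"utility_type": "electric", "service_type": "distribution"},
}

def check_against_onboarding(bill: dict) -> list[str]:
    """Return a list of deviations between an incoming bill and the
    memorialized onboarding definition; an empty list means no flags."""
    key = (bill["location"], bill["vendor"])
    expected = ONBOARDING.get(key)
    if expected is None:
        return [f"no onboarding definition for {key}"]
    return [
        f"{f}: expected {expected[f]!r}, got {bill.get(f)!r}"
        for f in ("utility_type", "service_type")
        if bill.get(f) != expected[f]
    ]

print(check_against_onboarding(
    {"location": "loc-17", "vendor": "vendor-acme",
     "utility_type": "electric", "service_type": "supply"}
))  # ['service_type: expected ...']
```

Any non-empty result would mark the bill for review rather than letting it flow silently into UBM.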
UBM Planning
## **Summary** The meeting focused on reviewing the current operational strategy, workload prioritization, and key technical issues within the UBM (Utility Bill Management) platform and its integration with PayClearly. The primary objectives were to assess processing queues for key customers, align on data integrity efforts, and coordinate on urgent system updates and payments. ### **Customer Prioritization and Team Workload** The team reviewed a prioritization list for customer accounts, focusing on those with past-due invoices and significant backlogs. - **Primary Focus:** Contractors Daniel and Bilal are primarily dedicated to **Victra** and **Ascension**, with the goal of cleaning up their entire queue in UBM, which includes invoices with due dates in the past, within the next 10 days, and beyond. - **Secondary and Expanded Focus:** Robert is handling **Sheets** and assisting with several Banyan customers (Artgraph, Aspen, Altus). The prioritization list was recently expanded to include **Big Y**, which Robert is now focused on cleaning up. - **Resource Allocation:** Resources are assigned based on due dates to distribute work evenly. Elizabeth primarily supports Banyan, while Karen handles more critical accounts like Medline, PPT, National Bank, and Park Ohio. A new contractor, Jacqueline, has begun assisting with the workload. ### **Operational Metrics and Reporting** A detailed report was discussed to track processing performance and queue status for prioritized customers. - **Report Structure:** The report shows the total number of accounts, invoices with past due dates, invoices due within 10 days, invoices due beyond 10 days, and the volume processed in the previous week. A **month-to-date processed invoices** metric will be added for better tracking. - **Data Discrepancy Resolution:** A discrepancy was noted between the number of accounts shown in the report versus other systems (e.g., 2700 vs. 3180). The team emphasized the need for consistent numbers across all reports and systems, prompting a cleanup initiative for closed accounts that may be skewing the data. - **Efficiency Goal:** The long-term objective is to evolve the UBM operator role into more of an **oversight function**, with an 80/20 rule where 80% of bills process automatically and only 20% require human review for exceptions. ### **Data Integrity and System Issues** Several technical and process-related issues were identified as impediments to smooth operations. - **Account Freezing:** A critical issue was identified where **every account for Ascension was frozen** due to a rules change, halting all processing. This was being actively resolved, with an expectation that accounts would be cleared within 30 minutes. - **Check Cancellation Policy:** The team analyzed the risk of shortening the check cancellation threshold from 21 days to 15 days. Data shows this would increase the risk of canceling a check that would later cash from 1% to **8.1%**, which carries potential repercussions like penalty fees or payment method blocks from utilities (a toy version of this risk calculation appears at the end of this summary). - **Lab & Mock Bill Process:** Updates were made to the "Lab" process and Power BI filters to correctly exclude closed accounts from reports. The process for handling bills where direct system access is unavailable (previously "mock bills") was renamed to **"Expedited Actual Charge" (EAC)**. A co-op is manually retrieving these bills for prioritized customers.
### **Payment Processing and System Updates** Key updates and testing requirements for the payment system integration were highlighted. - **PayClearly File Enhancement:** A two-part update to include 10 additional fields in the UBM billing account and the PayClearly payment file is in the final stages. Although "implemented," it requires **end-to-end business testing** and sending a sample file to PayClearly before deployment. - **Credit Card Payments:** For emergency credit card payments (e.g., for Victra), a solution needs to be operationalized. The action is to discuss with treasury about obtaining a dedicated P-card for such payments and to work with PayClearly on how to handle the associated AP files and fund movement. - **Reporting Cadence:** Mary will take over sending daily processing status updates (starting and ending queue counts) and the weekly IR report to provide better visibility into workflow. ### **Cross-Functional Coordination** Coordination with the PayClearly team was emphasized to resolve data discrepancies and improve automation. - **Data Discrepancy Investigation:** A meeting was scheduled with PayClearly (Tyler and Amanda) to investigate discrepancies between the payment status data shown in UBM via API versus what is displayed in the PayClearly application itself. The urgency is driven by the need for accurate reporting for clients like Medline and to eliminate manual report pulling. - **API and Real-Time Data:** There is a pressing need to establish a consistent cadence with PayClearly to push for real-time data availability in UBM, which would eliminate significant manual work currently spent on extracting reports from their platform.
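For the check-cancellation analysis above, the risk figures can be reproduced from historical clearing data as the share of cashed checks that cleared only after the proposed threshold. A toy sketch (the sample data below is invented; the real analysis produced the 1% and 8.1% figures):

```python
def late_cash_risk(days_to_cash: list[int], threshold_days: int) -> float:
    """Fraction of checks that cashed only after the cancellation threshold,
    i.e., checks that would have been canceled in error."""
    late = sum(1 for d in days_to_cash if d > threshold_days)
    return late / len(days_to_cash)

# Invented sample of observed days between check issuance and cashing:
sample = [3, 5, 7, 9, 10, 12, 14, 16, 18, 25]
for threshold in (21, 15):
    print(f"threshold {threshold} days -> {late_cash_risk(sample, threshold):.1%} risk")
```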
DSS Daily Status
## Summary The meeting focused on reviewing ongoing development tasks, clarifying requirements for a new feature, and planning next steps for system improvements. Key discussions revolved around debugging account-related errors, recent UI enhancements, and the strategic rollout of a new system module for managing invoice templates. ### Status Updates and Immediate Priorities The session began with a round of status updates to align on current work and urgent issues. The primary technical concern identified was the need to investigate account-related errors within the system. These errors lack a proper logging mechanism, making them difficult to trace through conventional logs. The team recognized that resolving this issue will likely necessitate significant changes to the underlying processes. Additionally, there was a follow-up on providing example invoices from Hexpull to aid in debugging, with a plan to collaborate on this task later in the day. ### Development Progress on UI and Reporting A significant update was provided on the completion of a front-end enhancement. The work on ticket DS131 successfully added a date picker component to date fields within the file details page. This feature now allows users to either select a date from the picker or type it manually, improving usability. The changes have been deployed to a development test server and are functioning correctly. In parallel, work was done to optimize the database query for the "builds only" report. This optimization is complete and currently undergoing testing in the development environment before being applied to the production database. ### Strategic Implementation of Bill Templates Feature The core of the meeting centered on planning the introduction of a **Bill Templates** management module within the DSS (Data Support System). The goal is to create a dedicated, read-only page where users can view configured invoice templates for different clients and vendors. The discussion clarified that this should be an iterative process, starting with a view-only interface to validate data and workflow before adding edit, add, or delete functionalities. - **Initial Scope and Design:** The feature will reside on a new page in the DSS application, separate from existing modules. The envisioned design includes searchable filters for clients and their associated accounts, as selecting an account is crucial for fetching the correct template data. The displayed information should mirror the structured JSON data found in the production database, showing fields related to commodities, currencies, units, and other vendor-client interrelated data. - **Data Source Clarification:** A key technical point clarified was the distinction between templates used for invoices uploaded via the DSS versus those from other sources (FTG). It was confirmed that when an invoice is uploaded via DSS without an account number, the system cannot fetch the corresponding template structure, highlighting the account number's critical role in the data retrieval logic. - **Future Roadmap:** While the immediate goal is a read-only view, the architecture should be planned with future interactivity in mind. The long-term vision is to allow users to directly edit template fields within this interface and save changes back to the database, transforming it into a comprehensive configuration tool. ### Analysis and Improvement of Account Matching Logic A planned investigation into the system's account matching logic was outlined. 
The task involves analyzing whether implementing an improved matching algorithm (such as stripping out spaces, dashes, and other extraneous characters from account numbers during comparison) would significantly increase match rates. The expected outcome is a quantitative analysis to understand the potential impact radius: whether this change could resolve a majority (e.g., 80%) of current matching failures or only a minor portion (e.g., 10%); a toy version of this analysis is sketched after this summary. This analysis is crucial for prioritizing development effort and understanding the scale of improvement for data processing workflows. ### Clarifications and Next Steps for the Week The meeting concluded with clear directives for the upcoming work period. The developer assigned to the Bill Templates feature will proceed with implementing the read-only view page based on the clarified requirements. The team also reaffirmed the need to verify the displayed template data with system operators to ensure completeness and accuracy, acknowledging that the initial release might not contain every possible data field. All other discussed action items, including the account error investigation and the matching logic analysis, were confirmed for immediate follow-up.
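A toy version of the proposed fuzzy-match analysis: normalize both sides of each historical mismatch and count how many would now agree. The normalization rules (strip non-alphanumerics, drop leading zeros, uppercase) are the ones discussed; the failure data is invented for illustration.

```python
import re

def normalize(acct: str) -> str:
    """Strip spaces, dashes, and other non-alphanumerics, drop leading
    zeros, and uppercase, so formatting variants compare equal."""
    return re.sub(r"[^0-9A-Z]", "", acct.upper()).lstrip("0")

def fuzzy_match_rate(failures: list[tuple[str, str]]) -> float:
    """Share of historical (bill, template) account-number mismatches that
    a normalized comparison would have resolved."""
    resolved = sum(1 for a, b in failures if normalize(a) == normalize(b))
    return resolved / len(failures)

failures = [
    ("1234-5678", "12345678"),   # dash only -> resolvable
    ("0012345", "12345"),        # leading zeros -> resolvable
    ("99 887 7", "998877"),      # spaces -> resolvable
    ("555123", "555124"),        # genuinely different -> not resolvable
]
print(f"{fuzzy_match_rate(failures):.0%} of failures resolvable")  # 75%
```

Run against the real failure log, this one number answers the 80%-versus-10% question posed above.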
UBM Errors Check-in
## Summary The meeting focused on resolving data inconsistencies within the billing and payment systems, specifically addressing ongoing bill error corrections, investigating a critical payment date discrepancy reported by a key customer (Medline), and confirming the resolution of a separate Power BI reporting issue. ### Bill Error Corrections and Process Refinement The discussion centered on addressing and refining the process for correcting bill errors flagged by the quality assurance team. A significant number of bills with past-due amount and unit of measure issues were identified, with a plan to reduce repetitive work by improving the query logic used to generate error reports. - **Addressing Error Reports:** The team reviewed the volume of bills with specific errors, such as "past due amount" and "2015-2016" issues. It was noted that many bills appear across multiple files, leading to duplication in the correction process. A key action is to refine the underlying queries to avoid sending the same bills for review repeatedly. - **Handling Inactive Customer Data:** A specific case involved approximately 50 bills from a deactivated customer (Renu). The decision was made to keep the customer record active for now but to ensure all automated payment processes (AP/bill pay) are switched off to prevent any unintended downstream financial transactions. - **Next Steps for Error Resolution:** The immediate plan is to send the updated files containing the current batch of errors (roughly 75 bills after exclusions) for processing. A follow-up call with the quality assurance analyst (Mary) was suggested to collaboratively refine the query filters and improve the efficiency of the error identification workflow. ### Investigation into Payment Date Discrepancy A substantial portion of the meeting was dedicated to investigating a critical discrepancy in payment dates between the internal platform (UBM) and the payment processor's (PayClearly) data, which was raised by the customer Medline. The team conducted a detailed forensic analysis to pinpoint the source of the inconsistency. - **Identifying the Problem:** The customer reported seeing different "cleared" dates for the same payment across two sources: the UBM platform and a detailed report from PayClearly. For example, one payment showed as cleared on December 11th in UBM but on December 10th in the PayClearly detail report. - **Forensic Data Analysis:** The team examined various data sources, including the UBM payment status tab, generated AP/PC files, and the PayClearly API data within Power BI reports. The goal was to trace the origin of each date field (e.g., file creation date, sent date, cleared date) and understand how they are stored and displayed (UTC vs. local time conversion). - **Pinpointing the Source of the Discrepancy:** The investigation concluded that the data shown in UBM is pulled from the PayClearly API. The discrepancy arises because PayClearly provides **two different data streams**: a real-time, detailed internal report and a slightly delayed API feed. The "cleared date" shown in the detailed report appears to differ from what is provided via the API, likely due to definitions of "cleared" or timing differences in ACH settlement. - **Resolution Path:** Since the root cause lies within PayClearly's systems and data definitions, the team agreed the issue must be escalated directly to PayClearly. 
The plan is to initiate a formal inquiry, involving previous points of contact (Tim, Tyler), to ask why their two data sources provide conflicting information for the same transaction. ### Power BI Report Status Verification The meeting briefly reviewed the status of a Power BI report concerning virtual account statuses, confirming that a previously identified issue had been resolved. - **Confirming Logic and Functionality:** The report's logic, which determines an account's status based on its associated virtual accounts, was reconfirmed. The rule states that if any virtual account is "open" or "unknown," the overall account status is "open"; it is only "closed" if all linked virtual accounts are closed. - **Issue Resolution:** A previous concern that this logic had stopped working was addressed. It was confirmed through recent communication that the report is now functioning correctly and displaying account statuses as defined by the agreed-upon business rules. ### Closing and Forward Actions The meeting concluded with an agreement on the immediate next steps, primarily focusing on external communication to resolve the core payment discrepancy issue. - **Primary Action:** The main forward action is to formally escalate the payment date discrepancy to the payment processor, PayClearly. This will involve starting a communication thread to seek clarification on why their API data differs from their detailed report data, ensuring all relevant internal stakeholders are included in the conversation for transparency. - **Ongoing Monitoring:** While the Power BI report issue was resolved, the discussion underscored the importance of monitoring such systems after changes are made to ensure business logic remains correctly applied.
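On the earlier point about refining error-report queries so the same bills are not sent for correction repeatedly: the core of the fix is deduplication by a stable key before export. A minimal sketch with hypothetical field names:

```python
def dedupe_error_rows(rows: list[dict]) -> list[dict]:
    """Keep one correction row per (bill_id, error_code), so a bill that
    appears in several export files is only sent for review once."""
    seen: set[tuple[str, str]] = set()
    unique = []
    for row in rows:
        key = (row["bill_id"], row["error_code"])
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

rows = [
    {"bill_id": "B-1", "error_code": "PAST_DUE", "file": "batch_01.xlsx"},
    {"bill_id": "B-1", "error_code": "PAST_DUE", "file": "batch_02.xlsx"},
    {"bill_id": "B-2", "error_code": "UOM", "file": "batch_01.xlsx"},
]
print(len(dedupe_error_rows(rows)))  # 2
```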
Bill Templates Review
## Matching Logic and Workflow Failures The meeting focused on diagnosing critical failures in an automated billing processing workflow, specifically within a system referred to as DSS (Data Services). The core problem is a rigid, hierarchical matching logic that causes bills to fail processing or create erroneous new accounts when data doesn't perfectly align with predefined templates. The discussion revealed that the process begins by attempting to match incoming bill data (account number and vendor ID) against a database. If this initial match fails, the entire workflow halts, sending the bill to a "failure state" where it remains stuck in "DSS in progress" without any notification or rollback to a prior system (DDS). This creates a silent backlog of unresolved bills. Conversely, if a match is found, the system proceeds to fetch associated account templates, service accounts, and meter data. However, failures at any subsequent level (e.g., missing service accounts) cause the system to skip critical validation steps, leading to incomplete or incorrect bill processing. ## Issues with the Hierarchical Data Model A significant portion of the conversation dissected the complex and fragile data hierarchy used for matching. The process requires perfect matches across multiple sequential layers: Client Account -> Service Account -> Meters -> Line Items. This structure is problematic because: - **It assumes data perfection:** The logic expects immutable data fields like `build class` (e.g., full service vs. supply) and `service address` to be consistent and correctly inferred from every bill, which is rarely the case in practice. - **It creates single points of failure:** If an account number has a formatting discrepancy (like a missing dash or space), all downstream steps, including commodity type and rate code identification, are skipped or fail. - **It relies on unreliable data:** Key matching fields for line items, such as `caption`, `commodity item description`, and `unit description`, are often parsed inaccurately by other systems (implied to be an LLM), making consistent matches unlikely. ## Proposing a Fundamental Rethink of the Process The consensus was that the current "all-or-nothing" matching approach is unsustainable. The group advocated for a paradigm shift with two key principles: - **Eliminate auto-creation during ingestion:** The system should never automatically create new accounts or templates during bill processing. This practice has led to a proliferation of unvetted data and makes problem diagnosis extremely difficult. - **Embrace "fail fast and flag" over "guess and proceed":** It is preferable for the system to clearly fail a bill when a confident match isn't found and flag it for human review, rather than attempting to proceed with incomplete or guessed data, which results in hidden errors. The proposed solution involves implementing **fuzzy matching logic**, particularly for account numbers, to handle common variances like spaces, dashes, and leading zeros. This is identified as the single most impactful fix that could resolve a majority of matching failures. ## Analysis and Investigation Plan To move forward, the team outlined a data-driven investigation plan: - **Conduct a fuzzy logic analysis:** The first action is to query all historical account number matching failures, apply fuzzy logic (stripping special characters, spaces, etc.), and quantify how many would have been resolved. This will validate the potential impact of the proposed fix. 
- **Perform end-to-end data mapping:** There is a need to create concrete examples that map data from the source bills, through the database templates, to the UI. This will ground future discussions in real data and clarify the current system's behavior. - **Audit template origins and change management:** Questions were raised about who can create or edit templates and in which systems (e.g., a legacy system vs. UBM). A related issue is the lack of version control for templates, making it impossible to apply changes retroactively or understand the historical context of a bill's processing. ## Addressing Downstream Implications and System Trust The flaws in the matching logic have eroded trust in the entire billing data pipeline. The team connected these ingestion problems to downstream reporting issues, such as **observation mismatches and double counting** in analytics. Furthermore, the inability to reliably match bills means many are not linked to any client account in the system, rendering them invisible for management and reporting. A long-term vision discussed includes introducing **versioning for build templates**, allowing for controlled changes that apply only to future bills and providing an audit trail. The immediate goal, however, is to stabilize the ingestion process by relaxing overly strict matching rules and replacing them with more robust and transparent validation.
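The template-versioning idea closing this summary could look roughly like the following: an append-only history where edits create new versions that apply only to bills dated on or after an effective date, preserving an audit trail. This is a sketch of the concept, not an existing implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TemplateVersion:
    version: int
    effective_from: date
    fields: dict          # e.g., {"commodity": "electric", "unit": "kWh"}
    changed_by: str       # audit trail: who made the change

class TemplateHistory:
    """Append-only template store: edits create new versions that apply
    only to bills dated on or after the effective date."""
    def __init__(self) -> None:
        self.versions: list[TemplateVersion] = []

    def add_version(self, effective_from: date, fields: dict, changed_by: str) -> None:
        self.versions.append(
            TemplateVersion(len(self.versions) + 1, effective_from, fields, changed_by)
        )

    def for_bill_date(self, bill_date: date) -> TemplateVersion | None:
        applicable = [v for v in self.versions if v.effective_from <= bill_date]
        return max(applicable, key=lambda v: v.effective_from, default=None)

h = TemplateHistory()
h.add_version(date(2024, 1, 1), {"unit": "kWh"}, "operator-a")
h.add_version(date(2025, 6, 1), {"unit": "therms"}, "operator-b")
print(h.for_bill_date(date(2025, 3, 1)).fields)  # {'unit': 'kWh'}: old bills keep the old version
```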
Bill Template Cleanup
## Summary The meeting centered on reviewing critical data quality and billing issues affecting three key customers: Hexpoal, Munson, and St. Elizabeth. The primary goal was to understand the scope of problems, align on root causes, and prepare a unified strategy for addressing them with the client in an upcoming meeting. ### Overview of Customer-Specific Cleanup Efforts A deep dive was taken into the cleanup work already performed for St. Elizabeth and Munson to identify patterns and recurring problems. The cleanup process involved a systematic sweep of customer accounts. This started by comparing account lists against a **Bill Health Report** to identify missing bills or erroneous data. The major issues uncovered fell into several categories: missing bills due to incorrect retrieval methods or unprocessed invoices, incorrect service dates (particularly for deliverable commodities like propane), missing or duplicate consumption lines, and account setup errors (e.g., distribution vs. full service). ### Root Cause Analysis of Identified Issues The discussion moved to diagnosing the underlying reasons for the persistent data problems, which are largely systemic. For **missing bills**, the cause is often a mismatch between expected and actual retrieval methods (e.g., web download vs. email), or bills being deprioritized and stuck in processing queues. **Incorrect service dates** are a fundamental challenge for delivered commodities (like propane or refuse), where the billing system must infer dates rather than receiving explicit service period data, leading to gaps that skew reporting. Many other errors stem from **incorrect initial account setup** that then cascades, causing ongoing data discrepancies. ### The Challenge of the Bill Health Report Reliance on the primary diagnostic tool, the Bill Health Report, was scrutinized due to its high rate of false positives and missed issues. While used as a starting point, the report is often misleading. It generates false positives for accounts with non-monthly billing cycles (like quarterly), closed accounts that haven't been updated in the system, and accounts with incorrectly configured service types (a simplified cadence-aware check is sketched after this summary). Consequently, the client has lost trust in this report, necessitating a new, more reliable process for identifying truly missing or erroneous data. ### Strategic Preparation for the Client Meeting The conversation focused on formulating a transparent and defensible position for the imminent meeting with the client, Jennifer. A key objective is to categorize all known issues into three clear buckets: systematic fixes needed from the development team, one-time cleanup tasks, and items requiring the client's own assistance. The strategy is to demonstrate that no new, unknown issues have been discovered with Hexpoal, Munson, and St. Elizabeth, only existing, understood problems. A major point of discussion was how to handle requests for **historical data cleanup**. The approach will be to carefully define what portion of historical errors are attributable to past processes versus current operations, as committing to a full, rapid historical cleanup may be unfeasible. ### Long-Term Solutions and Process Gaps The discussion concluded by acknowledging the need for enduring solutions beyond immediate cleanup. There is a clear need to **rethink how delivered commodities are handled** within the system to prevent ongoing date inaccuracies.
Furthermore, there is a critical gap in **validation and error flagging**; some validations were muted in the past, allowing errors to slip through. Re-enabling and improving these automated checks is essential for preventing future issues and efficiently identifying existing ones, rather than relying on manual account-by-account review.
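The cadence-aware bill-health check referenced above might, in its simplest form, look like the following sketch: an account is flagged only if it is open and its expected next bill (last bill plus its actual billing cadence plus a grace period) is overdue. All parameters are illustrative.

```python
from datetime import date, timedelta

def is_bill_missing(last_bill: date, today: date,
                    cadence_days: int, account_open: bool,
                    grace_days: int = 10) -> bool:
    """Flag a potentially missing bill only when the account is open and
    the expected next bill date (last bill + cadence + grace) has passed.
    Using the account's real cadence avoids flagging quarterly accounts
    as 'missing' every month."""
    if not account_open:
        return False  # closed accounts should never be flagged
    expected_next = last_bill + timedelta(days=cadence_days + grace_days)
    return today > expected_next

today = date(2025, 12, 15)
print(is_bill_missing(date(2025, 10, 1), today, cadence_days=30, account_open=True))   # True
print(is_bill_missing(date(2025, 10, 1), today, cadence_days=90, account_open=True))   # False (quarterly)
print(is_bill_missing(date(2025, 1, 1), today, cadence_days=30, account_open=False))   # False (closed)
```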
Teams Review
## Summary The meeting focused on resolving authentication and access issues related to SendGrid and its integration with Power BI for reporting and invoicing purposes. A significant portion of the discussion involved troubleshooting two-factor authentication (2FA) problems and establishing proper user access across different environments. ### Troubleshooting SendGrid Authentication Issues The primary challenge was gaining access to the SendGrid production account, which was protected by an unexpected two-factor authentication setup. The process was hindered because the authentication code was not being received via text message as expected. This blockage prevented access to crucial data needed for tasks like invoicing. - **Unexpected 2FA Block:** The account required a code from an Authenticator app, but it was unclear who had initially configured this security measure. Attempts to bypass it by requesting a code via SMS failed, as the expected text message was not delivered. - **Security Policy Complications:** The discussion acknowledged that while security teams (like Cyber) are essential, their protocols can sometimes create obstacles for operational tasks, creating a tension between security compliance and the need to perform job functions efficiently. ### Account Access and Permission Management A solution was engineered by directly managing user invitations and permissions within the SendGrid platform itself, sidestepping the problematic 2FA on the service account. - **Direct User Invitation:** Access was granted by sending direct invitations to a user's corporate email address for all three SendGrid environments: Development, Demo, and Production. - **Clarifying Account Types:** A distinction was made between user accounts (for human access to the dashboard) and service accounts or API keys (used for system integrations). The problematic 2FA was linked to a service account, not a standard user login. - **Permission Cleanup:** It was noted that duplicate entries for service accounts existed within the shared password manager (1Password), indicating a need for future cleanup to maintain credential hygiene. ### Security and Infrastructure Constraints The conversation highlighted broader organizational challenges related to security tools and development environments. - **Power BI Access Hurdles:** Internal security layers had recently been enhanced, which inadvertently blocked Power BI from accessing necessary data sources. This required additional work with other teams to re-establish connections for reporting. - **Platform Limitations:** The inability to run Power BI desktop on a Mac operating system posed a personal challenge for one participant, who explored using a remote virtual machine as a workaround. This investigative action itself triggered an alert from the cybersecurity team. ### Transition from SendGrid to Twilio Authentication An underlying cause for some login confusion was identified as an ongoing migration of authentication systems. - **Platform Migration:** Evidence suggested that authentication for the production environment was in the process of being transitioned from SendGrid's native system to Twilio's platform, which could explain irregular login behaviors and interface discrepancies. - **Billing Discrepancy:** During login attempts, an odd outstanding balance was observed on an invoice that did not align with the main billing records, though this was not resolved during the meeting. 
### Resolution and Access Confirmation The meeting concluded successfully with access being restored through the alternative method of direct user account provisioning. - **Successful Provisioning:** The participant confirmed receipt of invites and successfully set up access to the Development environment, with instructions to repeat the process for Demo and Production. - **Achieved Outcome:** The core objective of gaining administrative access to SendGrid for operational management was achieved, moving past the initial authentication deadlock. This access will enable future management of settings, stats, consumption data, and invoicing.
Build Template Review
## Summary The meeting was a technical deep dive into the current implementation and issues surrounding the "build template" process. The core problem is that while the system is designed to use predefined templates from a legacy database to ensure data consistency in the UBM (Utility Bill Management) system, malfunctions in this process are causing significant data integrity issues for clients. The discussion aimed to diagnose why the template logic isn't working as intended and to plan a path forward for both immediate client communication and a long-term, reliable solution. ### Understanding the Build Template's Purpose and Current Flow The build template is intended to act as a source of truth, pulling predefined customer setup data (like vendor, account number, commodity type) from a legacy system to populate and validate bills processed in the DSS. Its primary function is currently to populate the legacy system's database for operator review, but its potential for enforcing data integrity is not fully leveraged. - **Current Process Flow:** The template logic is executed after initial processing (IPS) and priority checks in the DSS. - **Data Transformation:** Data from the PDF extraction is first converted into a structured invoice format, though key fields like account ID may be null at this stage. - **Template Fetching:** The system attempts to fetch the correct template from the legacy database using the extracted **vendor ID and account number** as the primary matching keys. - **Goal of the Process:** Proper implementation should resolve common issues like double-counting consumption and mislabeled bill types by ensuring all incoming bills conform to a predefined, correct setup. ### Diagnosing the Critical Failure in Matching Logic A central discovery was that the template matching logic is **hierarchical and sequential**, where a failure at the first matching step causes the entire process to fail, leaving the system to infer all data without the template's guidance. - **Hierarchical Matching Cascade:** The system performs matches in a strict sequence: first by account number and vendor, then by service account, meter, and line data. If the initial account+vendor match fails, subsequent steps are not executed (a minimal sketch of this cascade appears after this summary). - **Root Cause of Issues:** Many errors stem from this first match failing. A discrepancy between the account number on the PDF and the one stored in the legacy system template is enough to break the chain. - **Consequence of Failure:** When no template match is found, the DSS is left to infer all field values on its own, leading to the very data inconsistencies the template was meant to prevent. ### Identified Systemic Issues and Limitations Beyond the matching logic, several systemic and operational constraints exacerbate the problem, making fixes and historical data cleanup challenging. - **Inability to Edit Templates:** A major limitation is that once a template entry (e.g., an incorrect unit of measure) is added to the legacy system, whether manually or by DSS inference, it cannot be removed, only supplemented. This pollutes the source of truth. - **Lack of Visibility and Control:** Operations teams in UBM currently have no visibility into what the predefined template settings are. Their manual corrections are made in a vacuum, potentially creating new, inconsistent "versions" of the truth without updating the core template.
- **Manual Historical Data:** A significant portion of the problematic historical bill data was processed manually, creating a baseline of inconsistencies that the automated system must now contend with. ### Implications for Client Reporting and Strategic Decisions The technical problems directly impact how to communicate with a concerned client (Hexpol/ULA) and what strategic remedy to propose for their and other clients' data. - **Communicating Realistic Solutions:** There is a strong emphasis on being transparent with the client about what can and cannot be fixed, distinguishing between solving issues for future bills versus remediating historical data. - **The Re-ingestion Question:** A strategic debate emerged on whether to perform a comprehensive "re-ingestion" or "re-scrubbing" of a client's historical bills once the root causes are fixed. This would treat the corrected data as a new onboarding, potentially reducing long-term manual review but setting a precedent other clients may request. - **Defining a "Golden Record":** A proposed solution involves creating a definitive, agreed-upon artifact (e.g., a comprehensive spreadsheet) that maps every unique combination of site, vendor, commodity, and account to a single, correct setup. This would serve as an immutable reference for both the AI and client agreement moving forward. ### Proposed Next Steps and Investigation Path The meeting concluded with a clear action plan to further diagnose the exact failure modes and to prepare for client communications. - **Deep-Dive Technical Documentation:** A follow-up was scheduled to document the exact step-by-step matching logic with concrete examples, clarifying what data is used and what happens upon match success or failure at each stage. - **Data Analysis for Client Discussion:** The team will pull the existing build template data for the Hexpol client to visualize the current "state of the truth" and assess how much of the data problem would be resolved by fixing the matching cascade. - **Redefining the Template's Role:** A key future goal is to evolve the build template from a simple population tool into an active **data integrity layer** that validates and corrects DSS output before it reaches the legacy system or UBM. - **Internal Strategy Alignment:** Before the client meeting, an internal sync is needed to align on messaging, deciding what fixes are promised for the future and what, if any, plan is proposed for historical data remediation.
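The matching cascade described in this summary, adapted to the "fail fast and flag" principle from the related discussions, might look like the following sketch: each level reports exactly where the chain broke instead of silently falling back to inference. Names and levels are illustrative.

```python
from enum import Enum

class MatchLevel(Enum):
    ACCOUNT = "account+vendor"
    SERVICE_ACCOUNT = "service account"
    METER = "meter"

def cascade_match(bill: dict, lookups: dict) -> tuple[bool, MatchLevel | None]:
    """Walk the strict matching hierarchy and report exactly which level
    failed, instead of silently falling back to inferred values."""
    checks = [
        (MatchLevel.ACCOUNT,
         (bill["account_number"], bill["vendor_id"]) in lookups["accounts"]),
        (MatchLevel.SERVICE_ACCOUNT,
         bill["service_account"] in lookups["service_accounts"]),
        (MatchLevel.METER, bill["meter_id"] in lookups["meters"]),
    ]
    for level, ok in checks:
        if not ok:
            return False, level  # fail fast: flag this level for review
    return True, None

lookups = {
    "accounts": {("12345678", "vendor-01")},
    "service_accounts": {"sa-9"},
    "meters": {"m-1"},
}
bill = {"account_number": "12345678", "vendor_id": "vendor-01",
        "service_account": "sa-9", "meter_id": "m-2"}
print(cascade_match(bill, lookups))  # (False, MatchLevel.METER)
```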
Teams Review
Prep
## **Summary** ### **Upcoming Planning Sessions and Strategic Alignment** The meeting primarily focused on coordinating upcoming strategic planning sessions and aligning the team's roadmap for the upcoming quarters. The core objective is to establish a coherent plan for Q1 and the first half of 2026, with an emphasis on operational stability over new feature development. - **European Planning and Q1/H1 Roadmapping:** Discussions are centered around finalizing plans for the European market and the broader Q1/H1 2026 strategy. Multiple dedicated sessions are already scheduled throughout the week to work through these plans collaboratively. - **Operational Focus for Early 2026:** The anticipated roadmap for at least the first half of the coming year is expected to be heavily focused on resolving existing operational issues and removing blocking dependencies, rather than on fulfilling new feature requests. The goal is to solidify the foundation of the operational product. - **Preparation for Leadership Review:** A meeting set for Friday by a colleague (Gary) was discussed, framed as a "2025 review and 2026 planning" session. The expectation is that the outcomes from the team's internal planning sessions will be synthesized into a shareable format, such as key bullet points, to present for alignment and feedback from leadership. ### **Coordination and Scheduling of Key Meetings** Significant time was dedicated to aligning calendars and ensuring efficient preparation for the critical planning discussions. - **Internal Team Sync:** A specific internal sync was scheduled for Thursday to consolidate the team's position and finalize materials ahead of the broader review meeting. The timing was carefully coordinated around existing commitments. - **Integration with Ongoing Sessions:** There was a strong intention to leverage the existing planning meetings happening earlier in the week (Wednesday) to draft the initial materials, making the subsequent Thursday sync a final locking and review step. ### **Side Discussion: Research on Behavioral Science in Product Strategy** A separate, more personal topic was introduced regarding academic research into the application of behavioral science within product development. - **Request for Expert Contacts:** One participant is conducting MBA research on how behavioral science insights and nudges can drive product strategy and innovation, particularly focusing on the gap between user motivation and stated intentions. - **Networking Assistance:** The other participant agreed to search their professional network for potential contacts with relevant experience, whether from internal corporate roles (e.g., data scientists or behavioral science groups) or from a design thinking perspective that intersects with behavioral insights. The goal is to facilitate introductory conversations to gain real-world insights for the research project.
Daily Priorities
## **New Weekly Reporting Structure and Customer Oversight** A new standardized weekly reporting process has been implemented to provide comprehensive oversight of all customer accounts. This initiative involves both the download and operational sides of the business using a shared customer list template. The primary goal is to create a single source of truth for account status, proactively answer leadership questions, and eliminate frustration caused by data mismatches or missing information. The reports are intended to be high-level summaries rather than raw data dumps, providing leadership with actionable insights without needing to sift through numerous detailed documents. - **Implementation of a unified template:** A standardized template has been distributed and must be filled out weekly by the team. This is designed to wrap operational arms around all customers from initial download through final processing. - **Addressing account vs. meter data:** A key focus is reconciling the number of *accounts* versus *meters*, as invoices are processed by account. Initial discrepancies in the data are anticipated and will require investigation to align both teams on the true operational scope. - **Structured meeting schedule:** These reports will be reviewed in dedicated weekly one-hour meetings. The schedule is being adjusted, with plans to move to a regular Tuesday slot. ## **Customer Escalation: Aurora University Billing Data Dispute** A significant escalation occurred with Aurora University, centering on a dispute over the granularity of billing data accessible to the customer. The situation is complex due to conflicting internal and external communications about the core issue. ### **Conflicting Perceptions of the Problem** The customer's complaint, which escalated to senior leadership, framed the issue as an inability to extract usage data by meter from their invoices. Internally, however, the understanding is that the customer is requesting highly customized data widgets and analytics at the meter level, which would require new development work. - **Escalation to leadership:** The issue "blew up" via email, reaching high-level executives and creating urgency. Internal confusion persisted because the customer's stated problem did not match the team's understanding of the actual request. - **Potential root cause:** There is a strong suspicion that a sales or onboarding promise was made to the customer regarding data accessibility that the current system cannot fulfill without technical modifications. - **Next steps:** A meeting with the customer is being arranged for the following week to clarify their exact requirements, involve the development team to assess feasibility and potential cost, and determine a path forward. ## **System Functionality and Operational Checks** Concerns were raised about the stability and intended function of certain system safeguards, specifically within the billing extension process. A flag designed to halt invoices over $24,000 for customer approval was questioned, prompting a verification of its operational status. - **Verification of a critical control:** It was confirmed that the system flag for high-value invoices is still active and functioning, alleviating concerns that it had been accidentally disabled or removed. The inquiry appears to have stemmed from a one-off manual override or processing exception. - **Ongoing manual reporting:** Leadership has requested operator metrics from the UBM (Utility Bill Management) system. 
This report on individual operator clearance volumes is currently a manual process managed by a specific team member and has not yet been automated. ## **Holiday Schedule and Team Coverage** Coordination for the upcoming holiday period was discussed, with a focus on ensuring business continuity and managing customer expectations during days with reduced staffing. - **Overlapping time off:** Key team members have scheduled days off around Christmas Eve and Christmas Day. Plans are being made to ensure coverage for critical tasks, with an emphasis on processing all invoices before the holiday to avoid disconnection risks. - **"Scream-free" policy:** A clear boundary was set for the holiday period: while team members may be online for processing work, they will not be attending meetings or responding to emails where customer "screaming" or high-intensity complaints are expected. The goal is to enforce a peaceful operational window. - **Awareness of external closures:** It was noted that many clients and utility providers may also be shut down during this period, which will naturally reduce the volume of commodity pricing updates and related work. ## **Pending Actions and Upcoming Discussions** Several items were flagged for immediate follow-up and coordination in the coming days, indicating a busy operational pipeline. - **Data report consolidation:** A combined report from Data Services and UBM is being shared to serve as a potential single source of truth, though its data still requires full verification from all operational angles. - **Urgent customer meeting scheduling:** Efforts are underway to schedule a meeting with the Aurora University customer for Wednesday or Friday of the current week. The preference is to resolve the issue quickly and avoid letting it linger. - **Team resource planning:** The departure of a contractor has forced a reassessment of hiring needs. Discussions will determine whether to shift an existing team member and hire for their previous role or to hire directly for the vacant position. - **Financial oversight gaps:** An admission was made that billing for carbon accounting services has likely lapsed due to bandwidth constraints, potentially representing lost revenue. This area requires immediate attention to rectify.
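For reference, the high-value invoice safeguard discussed above reduces to a simple threshold gate; a minimal sketch with an assumed manual-override flag (the override is a guess at how the observed one-off exception could occur):

```python
HOLD_THRESHOLD = 24_000.00  # dollars; invoices above this require approval

def requires_customer_approval(invoice_amount: float,
                               manual_override: bool = False) -> bool:
    """Hold any invoice over the threshold for customer approval unless an
    operator has explicitly overridden the hold for this invoice."""
    return invoice_amount > HOLD_THRESHOLD and not manual_override

print(requires_customer_approval(25_500.00))                        # True: held
print(requires_customer_approval(25_500.00, manual_override=True))  # False: released
```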
DSS Updates Review
## Summary The meeting focused on progressing development work across several key areas, primarily involving data integrations, user interface features, and code deployment processes. The team reviewed current ticket statuses, discussed the implementation of a new build template management feature, and addressed ongoing efforts to streamline the integration between systems. ### Report Status and Next Steps A primary report is nearing completion, with only minor pending changes anticipated. The team is preparing to finalize this deliverable, acknowledging that some feedback for improvements may still be received from stakeholders. - **Finalizing a key report:** The report is largely ready, with plans to submit it after incorporating a few final adjustments. This step is crucial for closing out a significant project milestone. ### Technical Integrations and Ticket Updates Significant discussion centered on technical tasks related to data flow and system functionality. A Python script for converting XML to the DSS format was reviewed, marking progress on a core integration piece. Several specific tickets were addressed concerning data validation and invoice processing. - **XML to DSS Mapping Progress:** A developer has shared a Python script and mapping documentation for converting data from XML to the DSS and HJS formats. This foundational work for data interoperability will be examined in detail to inform the next implementation steps (an illustrative miniature of such a conversion appears after this summary). - **Resolving Invoice and Data Validation Issues:** Issues preventing invoices from being sent to DSS were investigated and resolved, with code fixes deployed to test environments. Additionally, a fix was implemented and pushed for a data validation problem related to build status filters in DSS, ensuring all filtering options work correctly for users. - **Tracking User Actions for Invoices:** Development was completed on a feature to track users when they upload invoices directly to DSS. While functional in development, a display issue with user columns in the test environment requires further investigation into the test database. ### Build Template Feature Development A major initiative to surface and eventually allow management of "build templates" within the user interface was planned. The discussion clarified that the data already exists and is used within DSS, so the initial phase focuses on making it visible to operators. - **Phased Implementation Approach:** The feature will be developed in stages, beginning with a read-only view of existing build template data in a new UI page. Subsequent phases will enable users to edit templates and finally create new ones, addressing a critical gap as these templates are currently only manageable in a legacy system. - **Clarifying Requirements and Data Source:** To proceed, developers will analyze the existing DSS API, specifically the import and build template processes, to understand how the data is currently populated and structured. This will inform the design of the new interface. ### Code Deployment and Process Improvements The team identified a need to improve the merging and deployment workflow to prevent environment synchronization issues and reduce manual conflict resolution. - **Merging Open Changes to Production:** A directive was given to merge all pending DSS and FTG changes to the production environment to ensure all environments are up-to-date and consistent. - **Establishing a Better Merge Process:** To avoid future conflicts and keep development branches current, a new process was suggested.
Developers should regularly rebase their feature branches against the production codebase, especially if a pull request remains open for several days, to minimize integration headaches.
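The XML-to-DSS conversion mentioned above is, at its core, a field-mapping exercise. A miniature sketch using Python's standard library; the tag names, DSS field names, and flat-record shape are all assumptions, since the real script and schema were not shown:

```python
import xml.etree.ElementTree as ET

# Hypothetical field mapping; the real XML schema and DSS field names from
# the shared script are not reproduced here.
FIELD_MAP = {"AcctNum": "account_number", "VendorName": "vendor", "TotalDue": "amount_due"}

def xml_to_dss_record(xml_text: str) -> dict:
    """Convert one invoice XML document into a flat DSS-style record."""
    root = ET.fromstring(xml_text)
    record = {}
    for xml_tag, dss_field in FIELD_MAP.items():
        node = root.find(xml_tag)
        record[dss_field] = node.text if node is not None else None
    return record

sample = ("<Invoice><AcctNum>12345</AcctNum><VendorName>Acme Power</VendorName>"
          "<TotalDue>118.40</TotalDue></Invoice>")
print(xml_to_dss_record(sample))
```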
UBM Planning
## Summary The meeting focused on investigating and diagnosing a reported data discrepancy between different systems, specifically concerning date and timestamp values in financial or transactional records. ### Initial Problem Statement: Data Discrepancy in Reports A core issue was identified where data shown in a "UBM Processing Report" does not match the data received from external APIs or seen in other platforms like PayClearly. The discrepancy manifests in date columns, with mismatches observed in fields such as "Loaded on" and "Tracked at" dates. - **Discrepancy Evidence:** The team examined an Excel file provided by the client, which highlighted specific rows where dates and times did not align. For instance, one record showed a "Loaded on" time of "11:27" in one system but "11:26" in another, indicating a difference not solely attributable to simple time zone shifts. - **Core Question:** The primary challenge was determining the exact source of the data in the "UBM Processing Report" and the "PC Details" column to understand why the values differ from the data ingested via API. ### Investigation into Data Sources and Systems A significant portion of the discussion was dedicated to tracing the origin of the conflicting data points to pinpoint where the transformation or error occurs. - **Identifying the "PC Data" Source:** There was uncertainty about where the data in the "PC Details" column originates. It was clarified that while data is received from an external API (the "track API"), there is no known "PC detail report" within the internal UBM system. This suggests the client might be using a separate, unidentified Power BI report or data source. - **Comparing API Data with UI:** Analysis confirmed that the timestamp data received from the external API ("funded at" dates) matches what is displayed within the internal platform's UI, suggesting the ingestion pipeline is intact. - **UBM Report Ambiguity:** The team could not definitively identify what the "UBM Processing Report" referenced in the client's data represents, making it difficult to audit the transformation logic applied to the data in that specific output. ### Analysis of Time Zone Conversion as a Potential Cause The possibility that the discrepancies were caused by automatic time zone conversions was thoroughly explored, as this is a common issue with date-time data across regions. - **Platform Conversion Logic:** It was confirmed that certain pages within the application (like the Bill Summary page) automatically convert UTC timestamps from the database to the viewer's local time zone. This means users in different locations (e.g., the US, Romania, or India) would see different localized dates and times for the same underlying UTC event. - **Excel's Auto-Conversion Risk:** A critical technical point was raised that Microsoft Excel can automatically convert date-time data to the user's local system time zone upon opening a file. This could introduce a discrepancy if the data in the file was stored in UTC but displayed in a local time zone without explicit formatting. - **Assessment of Fit:** While time zone conversion could explain date shifts (e.g., a UTC time rendering as the previous day in a later timezone), it was noted that some observed discrepancies involved differences in minutes (e.g., 11:26 vs. 11:27), which cannot be explained by whole-hour time zone adjustments alone. 
### Examining Specific Client Data Points The team drilled down into a specific example from the client's spreadsheet to contextualize the theories. - **Case Study of Bill 1105915:** Using a specific bill ID, it was demonstrated how the "Loaded on" date appears as "11/27" in one location and "11/26" in another, directly illustrating the reported problem. - **Contradiction in Theory:** For the example examined, the "Tracked at" field in the API data (in UNIX/UTC format) should have converted to a "12/11" date. However, the client's data showed "12/10," reinforcing that the issue may not be a straightforward display conversion but rather the use of a different source data point altogether (e.g., a "submitted date" versus a "tracked at" date). ### Conclusions and Required Next Steps The discussion concluded by defining the blockers to resolution and the necessary information to proceed. - **Root Cause Hypothesis:** The leading theory is that the client's "PC Detail Report" uses a different source date field (e.g., a "clearing date" or "submitted date") than the "tracked at" field provided by the API and displayed in the application UI. - **Critical Missing Information:** A resolution cannot be reached without first understanding the exact source and definition of the data presented in the client's "UBM Processing Report" and "PC Details" column. The team needs to know which specific database field or report logic generates those values. - **Recommended Action:** It was agreed that the fastest path forward is to request the client provide screenshots of the specific application pages or report definitions in question. This will allow for a direct comparison between what they see in the system interface and the data in their discrepancy report.
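The time zone theory examined above is easy to demonstrate: the same stored UTC instant renders as different calendar dates in different zones. A small sketch with an invented "tracked at" timestamp:

```python
from datetime import datetime, timezone, timedelta

# A hypothetical "tracked at" UNIX timestamp: 2025-12-11 02:30 UTC.
tracked_at = datetime(2025, 12, 11, 2, 30, tzinfo=timezone.utc).timestamp()

for label, tz in [("UTC", timezone.utc),
                  ("US Eastern (UTC-5)", timezone(timedelta(hours=-5))),
                  ("India (UTC+5:30)", timezone(timedelta(hours=5, minutes=30)))]:
    local = datetime.fromtimestamp(tracked_at, tz)
    print(f"{label}: {local:%m/%d %H:%M}")
# UTC:                12/11 02:30
# US Eastern (UTC-5): 12/10 21:30  <- same instant, previous calendar day
# India (UTC+5:30):   12/11 08:00
```

Note that whole-hour (or half-hour) offsets reproduce the day-level shifts (12/11 vs. 12/10) but not minute-level mismatches, consistent with the meeting's conclusion that the client's report likely draws on a different source field altogether.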
Simon Account list
## Customer The customer is a significant client with a complex account structure, managing a large portfolio that requires meticulous oversight of invoicing, billing, and data integrity. Their operations involve high-volume transaction processing and detailed reporting, indicating a mature, data-driven approach to managing their financial and service obligations. They are deeply integrated into the platform, utilizing it for core operational functions and expecting a high degree of reliability and accuracy. ## Success The most significant achievement has been the strategic move toward establishing a single, verified source of truth for all account data. This initiative involves consolidating scattered reports and lists into a master account list, which will serve as the definitive foundation for all future reporting and operations. This centralization is seen as a critical step to eliminate inconsistencies and ensure that all teams are working from the same, accurate dataset, thereby improving overall decision-making and operational efficiency. The proactive effort to clean up duplicate accounts and validate data integrity further underscores progress toward this goal of reliable, unified information. ## Challenge The primary and most persistent challenge revolves around **data quality and system reliability**. A major issue has been the existence of duplicate accounts within the system, which distorts reporting and creates operational overhead for cleanup. This is compounded by technical bugs, notably within the web download helper tool, which have impeded data processing and led to missed or inaccurate invoices. Furthermore, unplanned system outages have directly impacted billing cycles and, critically, the communication flow to the customer regarding these issues. This lack of proactive transparency during downtime has led to frustration and eroded trust, highlighting a gap in crisis management protocols. ## Goals The customer's key objectives are centered on achieving operational excellence and system dependability. Their goals include: - **Data Integrity:** To have a completely clean, accurate, and validated master list of all accounts, free from duplicates and errors, forming a reliable foundation for all processes. - **Process Reliability:** To ensure all invoices are processed correctly and on time, eliminating the need for manual intervention or "mocks" due to system failures or data gaps. - **System Stability:** To experience consistent platform uptime and performance, with robust technical infrastructure that prevents bugs and outages from affecting core workflows. - **Proactive Communication:** To receive timely, transparent communication during any service disruption, allowing them to manage their own downstream processes and expectations effectively. - **Scalable Management:** To seamlessly onboard and manage new, large-volume accounts (like the upcoming replacement for a major client) without a degradation in service quality or an increase in operational friction.
DSS Daily Status
## **Summary** The meeting focused on reviewing the status of ongoing technical projects and bug fixes, with a particular emphasis on data reporting and system integration challenges. Key topics included progress on a new Power BI report for tracking late invoices, resolving discrepancies in invoice data synchronization, and planning a significant enhancement to the DSS system to better manage and display account information. The discussion also touched on upcoming national holidays affecting the team's schedule. ### **Status Updates on Technical Work** The session began with progress reports on several active tickets and development tasks. - **New Power BI Report Development:** Work is ongoing to create a new Power BI report to replace older, complex reporting methods. A critical issue was identified where three specific accounts were not appearing in the new report due to a one-day difference in the logic calculating the "expected date." The report's core function is to flag accounts where the expected invoice date is in the past. A new page is also being added to show *all* accounts for a customer without any date-based filters, aiming to create a single, comprehensive source of truth for account data. - **FDG Connect Bug Fixes:** Several user interface and data flow bugs in the FDG Connect system were addressed. Updates were made to ensure proper vendor selection and pagination reset in task lists. Furthermore, an investigation was conducted into a reported issue where manually entered invoice dates during bill upload were being overwritten by dates extracted by AI during processing in the DSS system. - **DSS User Tracking Feature:** Development is still in progress for a feature that logs which user uploaded an invoice directly through the DSS system. This work is proceeding on a part-time basis. ### **Data Synchronization and Invoice Date Issues** A significant portion of the discussion centered on data integrity issues between the FDG Connect and DSS systems. - The core problem involves a mismatch between invoice dates. When a user manually inputs an invoice date while uploading a bill via FDG Connect, this date is sometimes replaced by the "bill date" extracted from the PDF by an AI automation process when the data is pushed to DSS. - This is perceived as a data overwrite issue, though it was clarified that the originally reported bug might have been related to tasks reappearing in FDG Connect due to failed uploads, not just the date change itself. More information from the operations team was requested to debug this effectively. ### **Strategic Enhancement for DSS Account Management** A major strategic initiative was outlined to solve fundamental data context problems within the DSS (Data and Decision Support) system. - **The Core Problem:** DSS currently lacks a clear view of all pre-existing service accounts for customers, which are stored as "build templates" in the legacy DDIS system. This missing context makes it difficult to verify if all expected bills are being captured. - **Proposed Solution:** A three-pronged approach was agreed upon: 1. **Expose Existing Data:** Create a new page in the DSS UI to display all customer accounts and vendors, pulling data from the existing templates that are already being used post-IPS processing. 2. **Enable Template Management:** Allow users to modify these build templates directly within DSS, establishing them as the definitive source of truth for account setups. 3. 
**Improve Data Rigidity:** Ensure the system is more careful in creating new templates, as continuous creation indicates a failure to match incoming data to existing setups. The goal is to start development on this critical feature the following Monday. ### **Analysis of Power BI Report Logic** The conversation delved into the specific logic causing discrepancies between the old and new Power BI reports. - The discrepancy for the three missing accounts was traced to a **one-day calculation difference** in the expected date. In the old report, an account with an expected date of December 10th appeared as "late" on December 11th. The new report's logic showed the expected date as December 11th for the same account, so it did not appear as late until December 12th. - The strategic direction shifted towards **building a foundational "account list" view first**. This comprehensive list of all accounts, without any late-payment logic applied, will serve as the primary data source. The "late arriving bills" view will then become a simple filter or slice of this master list, ensuring consistency and accuracy. ### **Operational and Administrative Notes** The meeting concluded with brief administrative coordination. - The team confirmed upcoming company holidays for December 25th and 26th, though there was a need for clarification on whether both days were officially off. - A separate discussion occurred regarding internal tool access, noting that attempts to set up a local virtual machine to run Power BI on a company laptop triggered a security alert from the cybersecurity team, highlighting the locked-down nature of the work environment.
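To illustrate the one-day calculation difference described above, the sketch below assumes both Power BI reports apply the same "flag the day after the expected date" rule and differ only in the expected date they compute; the dates and function names are invented for illustration.

```python
from datetime import date, timedelta

def is_late(expected: date, today: date) -> bool:
    # Both reports flag an account the day after its expected date;
    # the discrepancy comes from *which* expected date each one computes.
    return today > expected

expected_old = date(2025, 12, 10)                # old report's calculation
expected_new = expected_old + timedelta(days=1)  # new report: one day later

today = date(2025, 12, 11)
print(is_late(expected_old, today))  # True  -> account appears late in the old report
print(is_late(expected_new, today))  # False -> missing from the new report until 12/12
```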
Ops Triage Sync
## Customer The customer is an organization using a utility bill management and payment automation platform. Their primary role involves overseeing and managing a high volume of utility bill transactions across multiple locations, vendors, and utility types. The platform is intended to centralize bill processing, data extraction, analysis, and automated payments for their operational portfolio. ## Success The most significant success achieved is the fundamental automation of bill payment processing. Despite facing numerous operational challenges, the platform enables the customer to manage and pay a large volume of utility invoices that would otherwise require an immense manual effort. It facilitates the handling of transactions for numerous accounts and helps in identifying billing data, even amidst complex scenarios involving multiple vendors and utility commodities. The system also supports foundational compliance work, such as preparing for SOC 2 Type II certification, which is a critical requirement for enterprise credibility and risk management. ## Challenge The most prominent and recurring challenge is the unreliability and lack of data integrity within the system, which directly impacts core operations. A primary issue is the frequent misclassification of bill data, where utility consumption units, costs, or vendor details are incorrectly extracted or mapped. This leads to downstream errors in analysis, reporting, and automated payments. Furthermore, there are systemic problems with payment processing, including invoices not being paid on time due to issues with third-party payment partners, which risks service disruptions. The team is also grappling with "bad data" that entered the system historically, requiring significant manual review and cleanup. The lack of a single source of truth or a rigid "bill template" for each account-vendor-utility combination means errors are not prevented at the point of entry, creating a continuous cycle of firefighting and customer complaints. ## Goals Key goals for the customer moving forward include: - **Achieving SOC 2 Type II Compliance:** To meet a contractual obligation and solidify trust with enterprise clients by obtaining this security certification within the next year. - **Establishing System Reliability and Data Integrity:** Implementing a systematic framework to define correct billing parameters for each account and proactively flag or block transactions that deviate from this setup (see the sketch after this list), thereby preventing errors before they propagate. - **Eliminating Payment Failures:** Resolving the root causes of transaction failures with payment partners to ensure bills are paid reliably and on time, avoiding service interruptions for their operations. - **Executing a Strategic Cleanup:** Developing and enacting a plan to methodically review and correct historical data inaccuracies, particularly for key accounts, to restore confidence in the platform's analytics and reporting. - **Improving Proactive Monitoring:** Enhancing the use of existing analytics and reporting tools to proactively identify missing invoices or data anomalies before customers report them, shifting from a reactive to a proactive operational stance.
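A minimal sketch of the "rigid bill template" idea referenced in the goals above: each account-vendor-utility combination gets a defined correct setup, and incoming bills that deviate from it are flagged or held at the point of entry. The field names, the vendor, and the single unit check are assumptions for illustration, not the platform's actual data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BillTemplate:
    # Hypothetical "correct setup" for one account-vendor-utility combination.
    account_id: str
    vendor: str
    utility: str
    unit: str  # e.g. "kWh" for electric, "gallons" for propane

def check_bill(bill: dict, templates: dict) -> list[str]:
    """Return deviations from the account's template; an empty list means the bill conforms."""
    key = (bill["account_id"], bill["vendor"], bill["utility"])
    template = templates.get(key)
    if template is None:
        return [f"no template for {key}: hold at point of entry for review"]
    issues = []
    if bill["unit"] != template.unit:
        issues.append(f"unit {bill['unit']!r} differs from expected {template.unit!r}")
    return issues

templates = {("acct-1", "Amerigas", "propane"): BillTemplate("acct-1", "Amerigas", "propane", "gallons")}
bill = {"account_id": "acct-1", "vendor": "Amerigas", "utility": "propane", "unit": "kWh"}
print(check_bill(bill, templates))  # flags the unit mismatch before it propagates
```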
DSS Planning
## **Summary** The meeting focused on strategies to enhance and expand the data ingestion ecosystem, primarily for utility invoice processing. A central theme was enabling new, automated points of entry for bill data from various sources, including internal Constellation networks and external third-party vendors, while streamlining and refactoring existing workflows to avoid redundant processing steps. The discussion covered technical architecture, deployment considerations, and specific implementation pathways for different data sources. ### **Enhancing Data Ingestion for Constellation Builds** The team examined methods to automate the retrieval and processing of customer bill data directly from internal Constellation sources. This initiative aims to bypass manual downloads and unnecessary Optical Character Recognition (OCR) processing by leveraging pre-extracted data. - **Current Integration & Strategy:** A connection has been established with the Data Access Gateway (DAC) team, providing access to a Databricks environment that houses Constellation bill data, including raw XML. The proposed approach involves creating a service that programmatically queries this source for available invoices based on customer accounts, pulling the data in bulk, and classifying it internally. - **Key Advantage:** This method provides greater control over the ingestion logic and timing, as the data container is continuously populated by Constellation independently. The team can design the pull mechanism to fit existing workflow schedules rather than relying on external notifications or fixed prescriptions from the data provider. - **Processing Goal:** The ideal end-state is for this ingested XML data to be transformed into a structured JSON format (similar to the VSS output) and fed directly into downstream systems without passing through the traditional DSS extraction pipeline, saving time and resources. ### **Architectural Deployment: Green Cloud vs. Red Cloud** A significant portion of the discussion was dedicated to determining the optimal hosting environment for the new ingestion services, weighing the concepts of "Green Cloud" and "Red Cloud" within the Constellation network. - **Fundamental Difference:** The core distinction lies in network trust and perimeter. **Green Cloud** resources are hosted within the internal Constellation enterprise network, allowing them to communicate freely with other internal services without firewall impediments. In contrast, **Red Cloud** resources are considered external, residing outside the trusted perimeter and must access internal services through strict, zero-trust authentication protocols, similar to any public third-party service. - **Strategic Decision Point:** The team debated whether to build the new service within the existing external ecosystem (analogous to the current FDG tenant) or as a greenfield project inside the Constellation tenant. Building internally (Green Cloud) offers smoother long-term integration as other systems migrate, but risks creating an isolated component if broader migration is delayed. The practical, short-term suggestion is to develop the service in the existing external ecosystem to accelerate delivery, acknowledging that firewall requests and ticketing processes will be required to access internal Constellation services from the outside. 
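As a rough illustration of the processing goal above, the sketch below flattens a Constellation-style invoice XML document into a JSON record that could skip the OCR/extraction pipeline. The element names and the flat JSON shape are invented; the real schema would come from the DAC/Databricks source and the VSS output format.

```python
import json
import xml.etree.ElementTree as ET

# Invented sample of Constellation-style invoice XML; real element names
# would come from the Databricks schema exposed by the DAC team.
SAMPLE_XML = """
<invoice>
  <accountNumber>12345</accountNumber>
  <invoiceDate>2025-11-26</invoiceDate>
  <totalAmount>1842.17</totalAmount>
</invoice>
"""

def xml_invoice_to_json(xml_text: str) -> str:
    """Flatten one invoice XML document into a VSS-style JSON record (assumed shape)."""
    root = ET.fromstring(xml_text)
    record = {child.tag: child.text for child in root}
    return json.dumps(record)

print(xml_invoice_to_json(SAMPLE_XML))
# {"accountNumber": "12345", "invoiceDate": "2025-11-26", "totalAmount": "1842.17"}
```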
### **Integrating Third-Party Vendor Data (e.g., Utility API)** The conversation expanded to include automated data pulls from external utility data aggregators, which present a similar architectural challenge but with different technical constraints. - **Use Case & Scope:** Services like Utility API can provide bulk PDFs and pre-extracted JSON data for a specific set of large utility vendors, covering areas like power and gas but not water or co-ops. This makes them suitable for targeted Proof of Concepts (POCs) rather than universal solutions. - **Proposed Implementation Flow:** The credentials stored in the system for a customer would be used to authenticate with the third-party API. The service would then fetch the electronic bills (PDF or JSON) for specified accounts and dates. A crucial design decision is to **avoid creating a separate, unique import flow** within the DDIS application. Instead, the goal is to leverage the existing DSS codebase to handle this new data source. - **Long-term Vision:** The objective is to treat vendor-specific API integrations as another automated data retrieval channel that feeds into a centralized, refactored ingestion workflow, maintaining consistency and reducing maintenance overhead. ### **Refactoring DSS Workflows for Future Flexibility** A consensus emerged on the need to modify the core Document Processing Service (DSS) to natively support these new, diverse data ingestion methods without relying on legacy processing steps. - **Bypassing Unnecessary Steps:** For data sources like Constellation XML or third-party JSON where extraction is already complete, the workflow must "short-circuit" the OCR and Intelligent Processing Service (IPS) steps. The data should flow directly to the transformation and population stages. - **Centralizing Logic:** The refactoring aims to make DSS the central hub for all data retrieval and initial ingestion. This creates a "black box" for other applications, which can simply request data without concern for its source. The service would use the shared database to determine client-vendor mappings and trigger appropriate fetch logic. - **Next Steps:** The immediate action is to begin the DSS refactoring project with these specific integration goals in mind, creating a flexible foundation upon which the Constellation and third-party vendor features can be built. ### **Enabling the UBM Drag-and-Drop Feature** The requirements for enabling a drag-and-drop invoice upload feature within the UBM (Utility Bill Management) application were clarified, highlighting it as a relatively straightforward integration. - **Minimal Requirements:** For a successful upload, the frontend sends the PDF file, ideally accompanied by three key metadata points: Vendor Account Number, Client ID, and Vendor ID. These fields help with routing but are optional; the system can attempt to derive them from the invoice content if not provided. - **Simple Integration Path:** UBM developers would need to call a single, well-documented DSS API endpoint. The process is designed to function like a "Dropbox," where the primary confirmation is a successful upload receipt. From there, the invoice enters the standard DSS processing pipeline. - **Authentication:** Access would be secured via a standard OAuth 2.0 flow, where UBM obtains a client ID and secret to generate tokens for API calls, a common pattern their developers are expected to be familiar with.
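A hedged sketch of what the UBM-side integration could look like: an OAuth 2.0 client-credentials token followed by a single upload call carrying the PDF and the three optional metadata fields. The URLs, endpoint path, and field names are placeholders (not the documented DSS API), and the example uses the common `requests` library.

```python
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"      # placeholder
UPLOAD_URL = "https://dss.example.com/api/v1/invoices"  # placeholder endpoint

def get_token(client_id: str, client_secret: str) -> str:
    # Standard OAuth 2.0 client-credentials grant.
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def upload_invoice(token: str, pdf_path: str,
                   vendor_account=None, client_id=None, vendor_id=None):
    # The three metadata fields aid routing but are optional; DSS can
    # attempt to derive them from the invoice content if omitted.
    metadata = {key: value for key, value in {
        "vendorAccountNumber": vendor_account,  # field names are assumptions
        "clientId": client_id,
        "vendorId": vendor_id,
    }.items() if value is not None}
    with open(pdf_path, "rb") as f:
        resp = requests.post(
            UPLOAD_URL,
            headers={"Authorization": f"Bearer {token}"},
            data=metadata,
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()  # the "Dropbox"-style upload receipt
```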
The meeting concluded with alignment on the strategic direction: to refactor DSS as the central ingestion engine capable of supporting multiple automated data entry points, thereby enhancing efficiency and scalability for processing utility bills from an expanding array of sources.
Review Problem Bills
## **Summary** ### **The Core Problem: Proliferation of Conflicting Reports** The central issue driving the meeting is the overwhelming number of disparate and unreliable reports for tracking invoice data and customer accounts, which has led to confusion, inefficiency, and a lack of confidence in the data. The absence of a single, authoritative source of truth makes it impossible to accurately manage client accounts, verify invoice receipt, and report on operational status. - **Multiple reports create chaos**: There are at least five different reports (e.g., Invoice Lifecycle, Invoice by Due Date Estimate, the legacy Lab/DSNUBM report, Client Account Status) all purporting to track similar data but yielding conflicting results. This leads to unproductive meetings where stakeholders debate which report is correct rather than solving problems. - **Critical data gaps are exposed**: A stark example is the Victra account, which has approximately 2700 accounts. However, key reports like the invoice downloads Power BI report show only about 900, and another legacy report shows only 65. This massive discrepancy underscores the severe lack of visibility into what invoices have been received, how they were obtained (web download, mail redirect, etc.), and their current status. - **The business impact is significant**: This data confusion prevents effective operations, hinders the ability to ensure all client invoices are captured, and damages credibility with both internal teams and external clients who receive inconsistent information. ### **Deep Dive into Existing Reports and Their Shortcomings** The discussion involved a live review of several specific reports to diagnose their failures and understand what data they actually contain. The goal was to assess which, if any, could serve as a foundation for a unified solution. - **Invoice by Due Date Estimate Report**: This report appears to be non-functional for current needs because it only shows tasks that require action, not a complete history of all processed invoices. It is therefore useless for creating a comprehensive account list. - **Invoice Lifecycle Report**: This report provides a more inclusive view of processed invoices but is structured around download dates rather than invoice dates, making it difficult for the operations team to use for their monthly reconciliation processes. Its filters and data sources need clarification. - **Legacy "Late Arriving Bills" (Lab/DSNUBM) Report**: This is the report specifically requested by the operations team as their historical source of truth. However, it seems to only display a subset of "late" bills. Even when filters are cleared, it shows a fraction of the expected accounts (e.g., 65 for Victra instead of ~2700), indicating it is either broken or fundamentally designed not to show all accounts. - **Client Account Status (Non-Tabbed) Report**: Among all reports examined, this one came closest to showing the expected volume of accounts for Victra (over 3100), though it includes duplicates and inactive/closed accounts. It was used as a temporary, stop-gap solution but is not considered the final answer. ### **Establishing a Unified Source of Truth** The primary resolution from the meeting was to designate and repair one central report to serve as the definitive source for all account and invoice data, consolidating the useful elements from the various existing reports. 
- **The chosen foundation is the Lab/DSNUBM report**: The team agreed to update and fix the legacy "Late Arriving Bills" report, as it is the one explicitly requested by the operations team. The objective is to transform it from a "late bills only" view into a complete master list of all accounts. - **Consolidation and archival plan**: To prevent future confusion, all other redundant or obsolete reports will be moved to an archive folder. Only a small set of vetted, unified reports will remain active, ensuring everyone references the same data. - **Immediate action items defined**: The development team will conduct a deep dive into the data sources and calculations powering the Lab report. This involves defining each column, verifying its source (e.g., UBM, DS, legacy SQL), and understanding calculations like "Short Payment Window Delay." Ambiguous or unverified columns may be hidden initially to launch a functional version faster. ### **Operational Challenges and Process Improvements** Beyond the core reporting issue, the conversation highlighted several adjacent operational and systemic problems that complicate data integrity. - **Account closure process is broken**: A significant issue is that accounts can be marked as closed in one system (e.g., UBM) but not communicated or reflected in another (e.g., Data Services). This leads to phantom accounts in reports and operational errors. The directive is to stop merely notating closures and to implement a systematic, cross-platform closure process. - **Handling of "Mock" invoices**: For clients like Victra, "mock" invoices are sometimes used as placeholders. The reporting system must be able to distinguish between mock and actual fiscal invoices, and flag accounts where a mock invoice is still pending replacement with the real one, to avoid giving a false sense of completeness. - **Automation with BDE**: There is a manual process for sending account closure and update CSVs to BDE. While an API integration was discussed months ago, it has stalled. The team will re-engage to scope out automating this data exchange, moving away from error-prone manual emails. - **Improving communication with Ops teams**: To address frustration over perceived slow progress on issue fixes, a new process was proposed: instituting weekly priority-setting meetings with the operations leads (like Afton and Rachel). This aims to align development priorities, demonstrate progress, and involve them in the process to build trust and manage expectations. ### **Client-Specific Data Issues and Broader Implications** The reporting problems are symptomatic of deeper data quality issues affecting specific, high-priority clients, raising concerns about contractual and reputational risk. - **Victra's missing invoices**: The discrepancy in Victra's account count is the most pressing example, but it reflects a company-wide problem of not having a reliable master account list for any client. - **Munson and St. Elizabeth audit pressures**: For these key clients, there is immense pressure to conduct a full audit of invoices, usage, and costs to satisfy channel partners and secure contracts. The team recognizes the task is monumental, as historical data is messy. The strategy is to focus on known problem patterns (e.g., propane cylinder line items, double-counted "use" and "use cost" entries) and audit those systematically, rather than attempting an impossible line-by-line review of thousands of bills. 
- **FreshMark escalation**: A query from the FreshMark client about a discrepancy between the number of invoices and "bill blocks" shown in the analytics tab has escalated rapidly within the organization. This underscores how data inconsistencies can quickly become major credibility issues requiring immediate attention. ### **Strategic Direction and Next Steps** The meeting concluded with a clear, agreed-upon path forward to regain control over data and reporting, emphasizing a move towards automation and systemic fixes over patching legacy processes. - **Fix the foundation first**: All efforts will be initially directed at creating the single, unified "source of truth" report based on the Lab report framework. This is the critical prerequisite for solving other problems. - **Move forward, not backward**: There is a strong consensus against merely fixing old systems to work as they did in the past. The goal is to build more automated, efficient, and scalable processes, even if it requires deprecating old, familiar reports. - **Embrace a unified view**: The future state involves having one master report with different tabs or filters (e.g., "All Accounts" vs. "Late Arriving Bills") to serve various needs, eliminating the redundancy that caused the current confusion. - **Develop a sustainable audit methodology**: For client-specific data issues, the team will work on developing a repeatable, logic-based audit process (e.g., flagging accounts that deviate from a defined "correct setup") that can be demonstrated to partners as evidence of diligent data management.
UBM Errors Check-in
## **Planning for 2026: Strategic Initiatives and Core Challenges** This meeting was a strategic planning session focused on identifying and addressing critical issues in preparation for 2026. The goal was to catalog all major topics requiring attention, including customer expectations, sales and CSM requests, and internal technical debt, to avoid being blindsided by unforeseen challenges. Discussion centered on decoupling systemic pain points, improving visibility across platforms, and establishing a foundation for new features. The conversation emphasized moving away from reactive firefighting and toward proactive, systematic solutions. ### **Bill Pay Feature Isolation and Enhanced Visibility** A primary focus was on the need to separate the operational and reporting issues specific to bill pay customers from the general analytics customer base, as current dependencies mean priorities for one group negatively impact the other. - **Decoupling Bill Pay Pain Points:** A core strategy involves creating dedicated workflows and views for bill pay to prevent issues in that segment from disrupting service for other customers. This includes potential UI changes, like separate tabs for emergency payments, to clarify data. - **Developing a Forensic Bill Lifecycle View:** There is a critical need for a comprehensive view to track a bill from download through to payment. This "invoice life cycle" report would answer fundamental customer questions about whether a bill was pulled, if it was late, and the reason for any delays, aiming to replace manual, week-long investigations with instant, asynchronous data access. - **Integrating PayClearly Data:** Work is underway to integrate PayClearly's API endpoints to pull funding and check reconciliation data. A temporary daily extract solution is in place, with a goal to move toward real-time updates via webhooks, ultimately enabling automatic matching of batch funding information with deposits. ### **Errors, Validations, and Manual Process Automation** The current approach to data errors and validations is creating significant manual overhead and needs a complete review, especially to differentiate between bill pay and other customers. - **Comprehensive Review of Validations:** A planned workshop with operations teams (Tim, Tara, Mary) is needed to audit all existing error checks and validations. The goal is to determine what still makes sense, what is bill-pay specific, and to establish a clear workflow for bulk fixes to avoid creating backlogs. - **Addressing Systemic Data Issues:** Many data integrity problems stem from validations outside the current "integrity check" scope. The focus is on moving beyond one-off fixes to establish automated, bulk-resolution capabilities before turning on stricter validation rules for postpaid customers. - **Long-term AI Strategy for Smarter Validations:** Looking beyond 2026, there is interest in exploring AI to create smarter validation rules that go beyond simple one-to-one matching, potentially addressing complex issues like duplicate payment detection and virtual account mapping. ### **Data Flow and Virtual Account Challenges** Inconsistent data flow between systems (DSS, DDS, UBM) and the problematic "virtual account" logic are root causes of many data quality and reporting issues. - **Refactoring System Handoffs:** A key initiative is improving the integration and handshake process between UBM, DSS, and DDS, including refactoring the output service. This aims to ensure consistency and reliable data transfer.
- **Re-engineering the "Bill Template" Concept:** A proposed solution to virtual account proliferation is to reintroduce and modernize the "bill template" or "setup bill" concept from DDS into DSS. This template would provide a consistent context (e.g., meter ID, account attributes) for all subsequent bills from the same vendor and account, instructing AI processing to inherit these attributes even if they are missing from the PDF. - **Linking Data Across Three Systems:** To troubleshoot issues efficiently, there is a need to develop a view that can merge the history of a single bill across DSS (AI-processed JSON), DDS (manual updates), and UBM (final state). This tri-system visibility would drastically reduce the time to diagnose why a bill is "wrong." ### **Reporting Engine and API Development** The reporting infrastructure is being modernized to provide more flexibility and self-service capabilities for both internal teams and customers. - **Building a Read-Only Reporting API:** The team is proceeding with a spike to create an API that exposes the existing reporting database views. This API-first approach, hosted within Kubernetes for environment-agnostic deployment, is valued for enabling future customer integrations and is targeted for development by May. - **Evaluating a Drag-and-Drop Report Builder:** While the API is a priority, the additional value of a user-friendly UX layer (a drag-and-drop report builder) on top of it is under evaluation. A stronger business case with specific customer examples is needed to prioritize this over providing the API and having internal resources build custom reports as needed. ### **New Feature Exploration: Contract Management** A completely new module for contract management was introduced as a strategic initiative for premium customers, requiring ground-up design and development. - **MVP as a Document Management System:** The initial phase is envisioned as a document repository within UBM, similar to the bills tab, where customers can manually upload contract PDFs and associate metadata (vendor, utility type, contract start/end dates, pricing type like fixed or index). - **Future Integration and AI Processing:** The long-term vision includes integrating with Constellation's systems to automatically pull contract data for their customers and potentially using AI to classify contract terms from PDFs. Initial efforts will focus on defining the data model and UI without being blocked by external legal approvals.
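A small sketch of the tri-system bill history idea described above: merge per-system event logs (DSS, DDS, UBM) into one chronological trail keyed by bill ID. The event shapes are assumptions; the real records would carry whatever each system actually stores.

```python
from datetime import datetime

def merged_bill_history(bill_id: str, dss_events, dds_events, ubm_events):
    """Merge per-system event logs into one chronological trail for a bill.

    Each events argument is assumed to be a list of dicts with at least
    'bill_id', 'timestamp' (datetime), and 'detail' keys.
    """
    timeline = []
    for source, events in (("DSS", dss_events), ("DDS", dds_events), ("UBM", ubm_events)):
        timeline += [{**event, "source": source}
                     for event in events if event["bill_id"] == bill_id]
    return sorted(timeline, key=lambda event: event["timestamp"])

history = merged_bill_history(
    "bill-42",
    dss_events=[{"bill_id": "bill-42", "timestamp": datetime(2025, 12, 1, 9), "detail": "AI JSON produced"}],
    dds_events=[{"bill_id": "bill-42", "timestamp": datetime(2025, 12, 2, 14), "detail": "manual unit fix"}],
    ubm_events=[{"bill_id": "bill-42", "timestamp": datetime(2025, 12, 3, 8), "detail": "final state synced"}],
)
for event in history:
    print(event["timestamp"], event["source"], event["detail"])
```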
DS Reports
## **Data Services Report Crisis and the Need for a Single Source of Truth** The central and most urgent topic of the meeting was the complete breakdown in reliable reporting from Data Services (DS). Multiple reports (like the "late arriving bills" report and "invoice download report") are either non-functional, outdated, or incomprehensible to the operations team, leading to a critical lack of visibility. This has resulted in heated external meetings where teams cannot account for invoice statuses, directly threatening client relationships and revenue. - **Multiple Broken Reports:** There is no single, reliable report to track all customer accounts and their last invoice download dates. Teams are forced to scramble between different, confusing reports (e.g., "invoice due date estimate," "late arriving bills"), none of which provide a complete or accurate picture. - **Operational Blindness:** This reporting failure means the team does not know which invoices are late, which have never been downloaded, or the overall status of accounts. A specific example was the discovery of 957 accounts in a report for Victra, a client known to have far more, highlighting the data discrepancy. - **Immediate Demand for Resolution:** The immediate action required is to identify, clean up, and establish **one definitive report** that serves as the sole source of truth for invoice status and account management. The preference is to fix and use the existing "UBM Billing Accounts" report if possible, rather than create a new one. ## **Critical Client-Specific Invoice Emergencies** Several key clients are at risk due to processing delays and errors, with serious financial and legal consequences. The team is actively firefighting these situations, which are exacerbated by the poor reporting. - **Victra's Duplicate "Special" Invoices:** Two invoices for the same Victra account were incorrectly processed as "special," risking non-payment and service disconnection. The root cause is unknown, and it requires emergency attention to prevent account shutdown. - **Medline's Imminent Legal Threat:** Medline is explicitly threatening to sue over unpaid invoices. The contract is relatively small, leading to a stark assessment that the company might be better off canceling it rather than facing legal costs. Daily clearing of Medline invoices is now an absolute priority to avoid escalation. - **Bannon's Contractual and Processing Standoff:** Bannon is requesting an extension to process invoices through April, but the team is firmly capping it at the end of February. A further complication involves Bannon asking for confidential bank account information sharing with another supplier (PayClearly), a request that was firmly denied. - **Jacob's Neglected Backlog:** The Jacob account has a significant backlog of invoices (starting at ~400) that has been largely ignored. While some progress has been made, the impending influx of 800 newly discovered missing accounts threatens to inundate this workflow completely. ## **Systemic Gaps Between Data Services and UBM** Underlying the reporting crisis is a deeper, systemic disconnect between the Data Services (DS) system, the DL Helper tool, and the UBM platform. This architectural flaw is the root cause of missing accounts and inaccurate tracking. - **The "800 Missing Accounts" Problem:** A major issue identified is approximately 800 accounts where credentials exist in DL Helper but are not properly linked as tasks for download in Data Services.
This means the accounts are technically in the system but are invisible to the download team's workflow. - **Dichotomy of Systems:** The DL Helper (where downloaders work) and Data Services do not perfectly sync. An account can exist in UBM/DS but not in DL Helper, and vice-versa. This leads to a fundamental mismatch in what each team considers the "source of truth" and causes accounts to be missed entirely. - **Linking as a Temporary Fix:** A proposed immediate workaround is to prioritize bulk-linking activities between systems to close the gap, though a permanent architectural solution is needed. ## **Team Coordination and Workload Management** In response to the crises, the discussion focused on re-allocating team resources to tackle the most pressing client backlogs and establish clearer internal processes. - **Prioritizing High-Risk Clients:** Specific personnel were assigned to focus exclusively on critical accounts: two people dedicated to Ascension, attention shifted to Jacob, and Karen tagged for Medline. The team on Bannon continues its work. - **Clearing Ops Review and Closing Accounts:** A process issue was highlighted where accounts needing closure are stuck in "Ops Review" instead of being formally closed in the system and synced to Data Services. The directive is to bulk-close where possible and clean up linking afterward. - **Staffing Adjustments:** New downloaders are being onboarded to the DL Helper team to increase capacity. Internal training is also happening, such as cross-training Jacqueline on bill pay tasks to help with the workload. - **Communication Protocol:** A clarification was made that client invoice requests should be routed through a single point of contact (Afton) to manage properly, rather than going directly to individual data specialists, to maintain accountability and visibility.
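The "800 missing accounts" gap is essentially a set difference between systems, which suggests a simple detection sketch like the one below. Account identifiers are invented, and the real bulk-linking logic would need to handle shared and renamed accounts.

```python
# Hypothetical account identifiers per system.
dl_helper_accounts = {"acct-001", "acct-002", "acct-003"}  # credentials exist here
ds_download_tasks = {"acct-001"}                           # download tasks exist here

# Accounts that are "in the system" but invisible to the download workflow:
unlinked = dl_helper_accounts - ds_download_tasks
print(sorted(unlinked))  # ['acct-002', 'acct-003'] -> candidates for bulk-linking

# The reverse gap also exists: DS/UBM accounts with no DL Helper credential.
orphaned_tasks = ds_download_tasks - dl_helper_accounts
print(sorted(orphaned_tasks))  # [] in this example
```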
DSS Daily Status
## **Summary** The meeting centered on improving data processing systems, with a strong focus on fixing integration issues, enhancing application functionalities, and clearing the path for deploying tested features to production. Key topics included resolving a critical data ingestion problem with a specific vendor, implementing user tracking in an upload system, and finalizing a major user interface upgrade for bill management. The team also reviewed the status of several tickets in testing and planned their deployment. ### **Constellation Invoice Integration & Data Improvements** A persistent issue with automatically processing invoices from Constellation was addressed, alongside requests for historical data from other vendors. The system has been incorrectly identifying services ("hallucinating") during IPS processing despite having the correct information. The plan is to use an official Constellation API to fetch XML invoices, convert them to the required JSON format, and push them to DSS to ensure timely and accurate billing. Additionally, there was a request for historical CSV data from Hexable (and potentially other vendors like St. Elizabeth and Munson) to analyze past discrepancies, highlighting that manual verification had previously uncovered errors needing correction. ### **FDG Connect User Attribute Tracking** Progress was reported on a ticket to record which user uploads a bill via FDG Connect's individual upload function. The current system logs the creator as "File Watcher" in the primary database, obscuring the actual user. The solution involves adding a new column in the DSS UI to display the true user who initiated the upload. Development was temporarily hindered by debugging basic functionality errors in the local FDG Connect environment, but the approach is clear and work is expected to resume. ### **DSS Web Downloader: Client View Enhancement** A significant enhancement to the DSS Web Downloader was presented and deemed ready for production. The update introduces a "View by" dropdown, allowing the operations team to filter and view bills grouped by *client* in addition to the existing vendor-based view. This solves a major operational pain point where gathering all bills for a single client required searching across multiple vendors. The implementation includes full sorting functionality (alphabetical, invoice count) and correctly filters the bill list based on the selected client. A related credential management page currently remains vendor-focused, which was noted as a potential future enhancement. ### **Testing Review & Production Deployment** Several completed features in the testing environment were reviewed for final validation and deployment approval: - **Duplicate Invoice Check:** A feature to check for duplicates based on invoice date, due date, and amount after DSS processing was confirmed as complete. - **IPS Status Filters:** New filters in DSS for "Complete" and "Failed" build statuses from IPS were tested and approved for deployment. This required coordination to ensure changes were synchronized across the IPS, DSS API, and UI. - **Build Status Column & Archiving:** The addition of a "DSS Status" column to the UI and a new archiving function for bills were demonstrated. The archive feature successfully hides bills from the main list, with an option to unarchive them.
- **Other Tickets:** Items like upload tracking for file exchange and late fees separation were verified as done, while a redirect for pre-audit failures was identified as requiring a specific user action to fix. ### **Tool Overview & Documentation Gap** A high-level overview of the DSS Web Downloader tool was provided for context. Its purpose is to assist the operations team in managing the periodic download of client bills from various vendor portals using stored credentials and then uploading them to data processing systems. It was acknowledged that comprehensive technical documentation for this application does not currently exist.
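The duplicate invoice check described above keys on invoice date, due date, and amount; a minimal sketch of that grouping logic follows. The invoice records are invented, and the production check may consider additional fields.

```python
from collections import defaultdict

def find_duplicates(invoices):
    """Group invoice IDs that share the same (invoice_date, due_date, amount) triple.

    Invoices are plain dicts here; real DSS records would carry more fields.
    """
    groups = defaultdict(list)
    for inv in invoices:
        groups[(inv["invoice_date"], inv["due_date"], inv["amount"])].append(inv["id"])
    return {key: ids for key, ids in groups.items() if len(ids) > 1}

invoices = [
    {"id": "inv-1", "invoice_date": "2025-11-26", "due_date": "2025-12-15", "amount": 512.40},
    {"id": "inv-2", "invoice_date": "2025-11-26", "due_date": "2025-12-15", "amount": 512.40},
    {"id": "inv-3", "invoice_date": "2025-11-27", "due_date": "2025-12-15", "amount": 98.10},
]
print(find_duplicates(invoices))
# {('2025-11-26', '2025-12-15', 512.4): ['inv-1', 'inv-2']}
```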
Review Problem Bills
## **Double-Counted Usage: The Primary Data Integrity Issue** The most significant and recurring problem identified was the double-counting of energy consumption within bill data. This error inflates total usage figures, leading to inaccurate reporting and analysis. - **Manifestation in various forms:** The issue presented itself in two main ways. First, within a single billing line item, both a 'use' observation (e.g., kWh) and a 'use cost' observation were incorrectly captured for the same consumption. Second, entire bill components that should have been separated (like supply and distribution) were incorrectly combined into a single "full service" block, causing their usage to be counted together incorrectly. - **Correction methodology:** The established fix is to **remove the standalone 'use' observation line and retain only the 'use cost' line**. This preserves the financial data while eliminating the duplicate consumption entry. The discussion noted a historical shift in how data services captured this information earlier in the year, moving away from a method that used specific 'build use' observation types to prevent double-counting. ## **Full Service vs. Distribution/Supply Breakdown** A critical diagnostic step involves correctly classifying bill types. Mislabeling is a root cause of linking failures between related accounts and of the double-counting issue. - **The "Full Service" mislabeling trap:** A common error is labeling a bill as "full service" when it should be designated as either "distribution" or "supply." This is crucial even for bills that only contain distribution charges, as they must be correctly categorized to later link with a separate supply bill from another provider. Incorrect labeling prevents this automated linking. - **Impact on system integrity:** When bills that should be separate are incorrectly combined under a full service label, it creates systemic errors that cascade through reporting and prevent accurate account reconciliation. ## **Propane and Special Commodity Cases** Processing bills for propane and similar fuels presents unique challenges that require manual scrutiny and specific rules, distinct from standard electric or natural gas bills. - **Service period complexity:** For delivery-based fuels like propane, the service period on a bill must align with the delivery period, spanning from the end date of the last delivery invoice to the end date of the current one. This often requires cross-referencing multiple statements and delivery tickets rather than taking the dates on a single bill at face value. - **Vendor-specific data quality:** The clarity of billing data varies significantly by supplier. For instance, Amerigas bills are typically straightforward, while other suppliers like Ferrellgas may have overlapping or confusing billing statements that demand extra attention to ensure accurate period alignment. ## **Mislabeled Electric and Lighting Accounts** Improper categorization of specific utility account types prevents them from being tracked correctly within the system, as they are designed to be managed separately. - **The consequence of incorrect utility types:** A frequent error is labeling "lighting" accounts as standard "electric." Since lighting (often for outdoor areas) is a distinct utility type, this mislabeling breaks the automated linking process. The same principle applies to other utilities, such as labeling "irrigation" or "fire protection" water lines simply as "water." 
- **Identifying clues on the bill:** Key indicators for identifying a lighting account include the rate code (often containing abbreviations like "OL" for outdoor lighting) or descriptions in the line items mentioning "lamps," "LED," or "halogen." ## **Historical Data and Mock Bill Issues** A significant data quality gap exists for historical bills imported into the system, which compromises the accuracy of any longitudinal analysis or reporting. - **The mock bill problem:** Historical data is often imported via spreadsheets and used to generate mock bills. A major flaw in this process is that these mock bills frequently capture only the cost information while omitting the corresponding usage data (kWh, therms, etc.). This renders the historical record incomplete and useless for meaningful consumption analysis. - **Compounding future reporting risks:** The prevalence of uncorrected data errors (like double-counting) in current bills, combined with incomplete historical data, sets the stage for a systemic crisis. This will likely surface when clients begin year-end reporting and reconciliation, potentially leading to a large-scale, urgent correction effort.
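The correction methodology above (drop the standalone 'use' observation, keep 'use cost') reduces to a small filter over a bill's observations. The observation shapes and type labels below are assumptions based on the terminology in the discussion, not the system's actual schema.

```python
def drop_double_counted_use(observations):
    """Remove standalone 'use' observations wherever the same line item also
    carries a 'use cost' observation, keeping the financial record intact.

    Observations are assumed to be dicts with 'line_item', 'type', and 'value'.
    """
    items_with_use_cost = {obs["line_item"] for obs in observations
                           if obs["type"] == "use cost"}
    return [obs for obs in observations
            if not (obs["type"] == "use" and obs["line_item"] in items_with_use_cost)]

bill = [
    {"line_item": "supply", "type": "use",      "value": 1200},  # duplicate consumption
    {"line_item": "supply", "type": "use cost", "value": 1200},  # retained
    {"line_item": "tax",    "type": "cost",     "value": 36.5},
]
print(drop_double_counted_use(bill))  # the standalone 'use' row is gone
```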
[EXTERNAL] Updated invitation: Constellation (UBM) Onboarding Call @ Mon Dec 8, 2025 1:30pm - 2pm (CST) (faisal.alahmadi@constellation.com)
## Customer The customer is part of a specific business unit (UBM) within a larger organization that is engaging in a SOC 2 Type II compliance assessment. The individual leading this effort is newly stepping into a role overseeing security and compliance processes, with a focus on consolidating and improving existing frameworks that had been managed across different parts of the business. Their background involves coordinating between engineering, operations, and external partners to establish a more efficient and sustainable compliance program. ## Success A significant prior success highlighted was the strategic decision to unify the compliance effort across all business units under a single external auditor. This consolidation is viewed as a major efficiency win, simplifying the audit process and creating consistency. Furthermore, there is an existing foundation of automation within their compliance processes, particularly in evidence collection, which indicates a mature starting point and a commitment to leveraging technology over manual effort. ## Challenge The most pressing challenge expressed is the significant internal time and resource drain caused by manual compliance activities. The team has been spending an excessive amount of time collecting and managing evidence for audits. This manual overhead is a key pain point the customer is determined to solve, with a clear directive to identify and automate any feasible processes to free up internal engineering and operational resources. ## Goals The customer's primary goals for the engagement are: - **Minimize Internal Effort:** To drastically reduce the time and manual work the internal team dedicates to compliance evidence collection and questionnaire responses. - **Maximize Automation:** To aggressively identify and implement automation opportunities, especially by integrating various systems (like GitHub, AWS, Azure, and HRIS platforms) into their GRC tool (Vanta) to enable automated control testing. - **Streamlined Process Integration:** To establish a clear, efficient workflow where policies are drafted by the partner, reviewed internally, and managed within their GRC platform with minimal back-and-forth. - **Achieve Timely Compliance:** To successfully complete the SOC 2 Type II audit within a targeted timeline, while being mindful of and aligning with other major internal initiatives, such as a large-scale migration to Azure.
Bill Error Validation
## Customer The customer is a client in the energy and sustainability sector, relying on a comprehensive AI-powered platform for processing and analyzing utility bills (electricity, natural gas, propane, water) across a large portfolio of facilities. They use this data for critical financial planning, reporting to leadership, and advising their own clients, placing immense importance on data accuracy and reliability for maintaining their trusted reputation. ## Success The most significant success highlighted was the platform's role in **averting a major reputational and financial crisis**. When faced with an impending external audit for a key client (Hexpo), the team was able to leverage the system to conduct an intensive, manual validation over a weekend. This effort identified and rectified critical data errors. The intervention was described as "pulling it out of the fire," directly preventing potential severe consequences, including the risk of losing the client or damaging crucial professional relationships. This episode demonstrated the platform's underlying utility as a tool for deep forensic analysis when mobilized correctly, showcasing the vendor's commitment to partnership in a crisis. ## Challenge The most pressing and systemic challenge is **ongoing data inaccuracy and inconsistent ingestion processes**, which severely undermines trust and creates excessive manual workload. Several core issues persist: - **Accuracy Problems:** Errors like mis-mapped propane cylinder units, duplicates, missing data, and commodity-specific issues (e.g., with natural gas) have been found even in recently onboarded clients. This casts doubt on the integrity of all historical data and current dashboards used for financial planning. - **Process Breakdowns:** Bills are experiencing significant delays in ingestion, sometimes taking eight weeks or more, requiring constant manual chasing by the customer's team. The existing process lacks transparency, making it difficult to see where a bill is stuck. - **Resource Drain:** The customer's team is burdened with continuous validation work and troubleshooting, described as "wearing down." This distracts from their core advisory role and creates operational inefficiency. The root cause of these recurring issues is not fully transparent, leading to concern that problems will repeat across the entire client portfolio. ## Goals The customer's immediate and near-term goals are centered on restoring confidence and achieving operational stability: - **Validate Portfolio Integrity:** Extend the validation exercise performed for Hexpo to other major clients (specifically Munson and St. Elizabeth) to understand the scope of data issues and confirm their data is "airtight." - **Achieve Reliable Baseline Data:** Move beyond crisis management to a state where the core data in the system can be trusted implicitly for reporting and client advisement. - **Implement Transparent Processes:** Receive clear timelines and plans for systemic fixes, including improved controls, QA steps for AI-processed data, and better status tracking for bill ingestion. - **Reduce Manual Overhead:** Implement platform improvements that significantly reduce the need for their team to manually track missing bills, validate line items, and chase down errors.
Comm Check
## Current Credential Management and Sync Process The meeting began with a detailed explanation of the current, manual process for synchronizing vendor credentials between the Data Services (DS) database and the UBM system. This process is foundational to how client access is currently managed. - **Daily Manual Sync:** Credentials, including login, password, and URL for each client account, are extracted from the DS database into a CSV file. These are then processed and individually updated or inserted as separate attributes within the UBM platform on a daily basis. - **Handling Shared Accounts:** A notable complexity in the process is that a single vendor account can be associated with multiple UBM clients. The system is designed to map these relationships correctly during the sync. - **Absence of Smartsheet Integration:** It was confirmed that the Smartsheet platform, which is used by other teams, is not currently part of this credential synchronization workflow. ## Critical Security Issue: Internal Portal Credential Exposure A primary focus of the discussion was addressing a significant security vulnerability involving a specific email account (`opsai@...`) used for an internal energy management portal. - **Widespread Unauthorized Access:** The credential for the internal "Energy Manager" portal was found to be synced to approximately 279 different customer accounts within UBM. This meant that if the exposed password was used, an individual could potentially access Constellation utility bills for all UBM customers. - **Source of the Problem:** The credential was actively being pulled from the Data Services database and propagated through the daily sync into the UBM environment. Simply deleting it from UBM would not be a permanent fix, as the sync would repopulate it the next day. - **Immediate Risk Mitigation:** The urgency of the situation was underscored by the fact that the password had already been shared with an unauthorized party, necessitating immediate changes to both the sync process and the credential itself. ## Immediate Actions and Resolution Plan A concrete, immediate action plan was agreed upon to remediate the security vulnerability and prevent its recurrence. - **Removal from Sync Source:** The first and most critical step was to modify the source data scripts used for the daily sync to permanently exclude the `opsai@...` login credentials, preventing them from ever being re-synced into UBM. - **Cleanup of Existing Data:** Following the change to the sync process, the existing entries for this credential would be purged from the UBM production database to remove any outdated and compromised passwords from the system. - **Credential Rotation:** With the sync stopped and the old entries removed, the team would then proceed to change the actual password for the `opsai@...` account, fully securing the internal portal. ## Systemic Challenges and Lack of a Single Source of Truth The conversation highlighted broader, systemic issues within the organization's credential management, which contributed to the security incident and operational inefficiency. - **Disconnected Systems:** Data Services (DS), UBM, and Smartsheet are three separate platforms, each treated as a source of truth by a different team (Data Services, UBM Ops, and CSMS, respectively). These systems do not communicate changes automatically.
- **Inconsistent Update Paths:** For example, when customers report credential updates to the CSMS team, changes are made only in Smartsheet and are not automatically propagated to DS or UBM, creating data inconsistency and reliability issues. - **Manual Workarounds:** The current daily sync process was established as a manual bridge between DS and UBM, but it does not incorporate Smartsheet, leaving a gap in the data flow and requiring manual intervention. ## Future Direction: Automation and Centralized Management To address the root causes of the disconnect and inefficiency, the discussion concluded with a vision for a more robust, automated solution. - **Proposal for a Centralized Database:** The proposed long-term solution is to establish a single, separate credential database that serves as the definitive source of truth for all systems. Any update in this central repository would be automatically propagated to UBM, DS, and other platforms. - **Enhanced Metadata:** This central system could also include valuable metadata, such as "last updated" timestamps, providing much-needed audit trails and simplifying credential lifecycle management. - **Automation of Manual Processes:** While the current manual sync was acknowledged as not overly time-consuming, its automation was agreed to be beneficial for consistency and to eliminate the risk of human error or forgotten updates. However, it was noted that this automation work is often deprioritized against other development tasks. - **Technical and Operational Hurdles:** Beyond the technical build, the main challenge identified is operational: aligning the different teams to agree on and use a common source of truth, overcoming the existing siloed ownership of credential data.
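A sketch of the agreed sync-source fix described above: exclude the compromised login at the point where the DS export is read, so the daily sync can never repopulate it in UBM. The CSV column names are assumptions, and the blocked address is left redacted as it is in these notes.

```python
import csv

# Logins that must never propagate into UBM (e.g. the internal portal account).
BLOCKED_LOGINS = {"opsai@..."}  # the real address is redacted in the notes

def rows_to_sync(ds_export_path: str):
    """Yield DS credential rows for the daily UBM sync, skipping blocked logins.

    Assumes the CSV export has 'login', 'password', and 'url' columns.
    """
    with open(ds_export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["login"] in BLOCKED_LOGINS:
                continue  # excluded at the source so the sync cannot repopulate it
            yield row
```

The same exclusion applied upstream in the export query would work equally well; the essential point is that the block happens before UBM, not as a cleanup afterward.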
Daily Priorities
## Summary The meeting centered on addressing several critical operational challenges currently impacting the business, with a strong emphasis on data integrity, payment processing accuracy, and procedural security. Key discussions revolved around resolving widespread data issues for a major client, mitigating urgent payment and disconnection problems, and addressing a significant security vulnerability. The team also reviewed recurring payment errors and planned for standardizing onboarding processes. ### Ongoing Data and Operational Issues The team is managing severe data and operational challenges, primarily with a key account (URA), which has required extensive manual correction work. - **Major Client Data Cleanup:** A significant effort was required over the previous weekend to fix numerous data issues for the URA account. Problems included incorrect data captures, usage discrepancies, unit of measure errors, and invoice number mistakes, particularly affecting natural gas and propane bills where inconsistent vendor conversions caused widespread calculation errors. - **Urgent Payment and Disconnect Notices:** Several accounts, including Medline and Ascension, are at risk due to late payments and potential disconnections. Medline, with approximately 300 outstanding invoices, represents a critical failure point, as the company has been a client for a year and has portal access, making these delays unacceptable. The team is actively working to process a backlog of invoices to prevent further late fees and service interruptions. ### Security Vulnerability in Third-Party Portal A serious confidentiality breach was identified involving shared login credentials for the Constellation EnergyManager (CEP) portal. - **Exposed Credentials:** The shared username and password for the internal Constellation portal, which provides access to all client commodity data, was emailed to an external party. This poses a major contract and confidentiality risk. - **Required System Fix:** An immediate fix is required in the data synchronization process to ensure these shared credentials are excluded from being visible within the internal UBM platform. A password change is also necessary, but the sync logic must be corrected first to prevent the new credentials from being exposed again. ### Analysis of Duplicate Payments and Process Errors A review of duplicate payments revealed systemic issues in the billing and payment process that extend beyond automated data repairs. - **Root Causes Identified:** Duplicate payments were caused by a combination of automated system errors (e.g., auto-mapping of unmapped bills) and human errors, such as incorrectly zeroing out actual invoices instead of mock bills after payment. A specific analysis of one list showed credits on accounts beyond what was initially reported, indicating more widespread problems. - **Client Pressure and Pushback:** Clients like Victra are now pushing back on practices such as creating mock bills and emergency payments. The team acknowledged that while they may agree to stop these practices temporarily, the underlying issues related to not having Electronic Data Interchange (EDI) feeds are a root cause of the discrepancies. ### Standardizing Onboarding and Operational Procedures The need for standardized processes, particularly for client onboarding, was highlighted as a critical step to prevent recurring operational failures.
- **Upcoming SOP Meetings:** A two-day meeting is scheduled for January 13th and 14th to establish Standard Operating Procedures (SOPs), with onboarding being the first and most critical focus area. The current lack of clear timelines and consistent processes for onboarding is identified as a primary source of delays and errors downstream.
- **Need for a Testing Protocol:** The team discussed the lack of a robust testing protocol for large-scale data fixes before they are executed in the production environment. The consensus was to start using a staging or test environment to validate the outcome of major batch operations, such as past due amount adjustments, before implementing them live to prevent errors affecting client accounts.

### Miscellaneous Administrative Updates

Brief updates were provided on internal resource allocation and administrative tasks.

- **Resource Re-allocation:** An employee (Benny) is being moved to focus on downloading actual usage data for key, high-priority clients to ensure those tasks are completed.
- **Administrative Tasks:** A new laptop and docking station were requested for an employee experiencing hardware issues, and the process for ordering through the internal system was confirmed.
Jay <> Faisal
## Summary

### Holiday Schedules and Resource Availability

The meeting opened with a discussion about upcoming holiday schedules and their impact on team availability, particularly for development work in late December and early January. Several national holidays were noted, and it was acknowledged that team members would have varying availability, requiring coordination. The importance of checking schedules with all team members, including those from external partners like Cognizant, was emphasized to ensure project continuity. Key individuals like Ruben, Dan, and Mitchell were identified as potentially available to manage infrastructure needs during this period.

### Strategic Planning for Q1 and H1

The core of the meeting focused on strategic planning for the first quarter and first half of the year for the UBM platform. The primary goal is to identify initiatives that will make a significant impact on bill pay customers and reduce the engineering time spent on repetitive tasks. A key area of focus is reducing the 30-40% of engineering capacity currently consumed by certain manual processes. The discussion aimed to move from high-level goals to a concrete, prioritized list of projects that could be realistically accomplished given time and resource constraints.

### Analyzing and Streamlining Customer Onboarding

A significant portion of the conversation was dedicated to dissecting the current customer onboarding process to identify bottlenecks. While onboarding was initially perceived as a major engineering burden, it was clarified that the bottleneck isn't necessarily engineering itself but depends on the customer type. The main engineering touchpoints were identified as:

- **AP File Configuration:** For customers with custom ERP systems (e.g., AppFolio, Oracle), engineering is needed to create customized AP files. Work is already underway to expose configuration to operators, and a future initiative involves consolidating common ERP templates into documented, generic solutions to minimize custom development.
- **Custom Reporting:** The need for customer-specific reports (like for MedXL or Simon) requires engineering work due to a lack of a self-service reporting framework. A potential long-term solution discussed is building a flexible, drag-and-drop reporting interface for customers, though this requires technical discovery regarding database limitations.

### Reimagining the Bill Pay and Validation Workflow

A deep dive was taken into the bill pay process and its associated data validations. The current system's main challenge is that all validations block payment, causing delays for bill pay customers. A proposed solution is to implement a **two-tier validation system** (see the sketch at the end of this summary):

- **Tier 1 (Critical Validations):** These would be a subset of validations (e.g., data integrity checks) directly tied to the accuracy of the payment amount and AP file generation. *Only these validations would block a bill from being paid.*
- **Tier 2 (Analytical Validations):** This tier would include other data verification and audit checks important for reporting and analytics. Bills could be paid even with these outstanding issues, with operators resolving them retroactively.

However, critical implementation challenges were raised, particularly around the **synchronicity between bill pay files and AP files**. Once a bill is paid, most data related to costs cannot be changed retroactively without creating discrepancies.
The only retroactive changes deemed feasible are those related to *usage data*, which affect analytics but not financial reconciliation.

### Engineering Resource Planning and Project Pipeline

The conversation concluded with immediate concerns about engineering bandwidth for the upcoming January sprint. There is a specific need to identify valuable work for front-end and design resources, as many of the discussed initiatives (data workflows, AP file templates) are backend-heavy. The existing backlog may not provide enough substantive work for the full team. Several potential projects were floated to address this gap:

- **API Development:** Exposing certain platform functionalities via API was mentioned as a possible project.
- **Historical Build Analysis:** Investigating past data builds was noted as another backlog item.
- **Interpreter Layer ("SS" Project):** A related initiative to build an interpreter was briefly mentioned, which aligns with improving the data handshake process between platforms. This project could involve both front-end and back-end work.

The agreed-upon next step is to compile all discussed ideas, from high-level strategic goals to specific technical enhancements, into a single document for prioritization and preliminary solutioning in a follow-up meeting.
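The two-tier gating discussed above lends itself to a short sketch. Everything below is illustrative: the validation rules, data shapes, and names are invented for the example, and the real system's validation set is not described in the notes.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Tier(Enum):
    CRITICAL = 1    # blocks payment (data integrity / AP-file accuracy)
    ANALYTICAL = 2  # reported, but does not block payment

@dataclass
class Bill:
    amount: float
    usage: float | None = None

@dataclass
class Validation:
    name: str
    tier: Tier
    check: Callable[["Bill"], bool]  # True = passes

# Hypothetical rules for illustration only.
VALIDATIONS = [
    Validation("amount_positive", Tier.CRITICAL, lambda b: b.amount > 0),
    Validation("usage_present", Tier.ANALYTICAL, lambda b: b.usage is not None),
]

def evaluate(bill: Bill) -> tuple[bool, list[str]]:
    """Return (payable, outstanding_issues). Only Tier 1 failures block payment."""
    failures = [v for v in VALIDATIONS if not v.check(bill)]
    payable = all(v.tier is not Tier.CRITICAL for v in failures)
    return payable, [v.name for v in failures]

# A bill missing usage data can still be paid; operators resolve the
# analytical issue retroactively, as discussed in the meeting.
print(evaluate(Bill(amount=120.50)))  # (True, ['usage_present'])
```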
DSS Updates Review
## **Ticket Updates and Progress**

The meeting primarily focused on reviewing the status of several development tickets across the FTG Connect and DSS applications. Significant progress was reported, with multiple features now ready for or undergoing testing.

- **Dynamic Button for Carrying Files**: Changes for this feature have been successfully applied and are working on the Deviant test server, ready for verification.
- **MPG Setup and Vendor Sorting (DS19)**: Work to sort the vendors list by invoice count in the bills processing tab is complete. This update has been deployed to the development and test environments on FTG Connect.
- **Smart Filtering Button (DS29)**: Development for a button to enable or disable filtering for clients and vendors on the uploads page is finished. The feature is currently in the testing phase across various scenarios, with a deployment target of the following day.
- **Formatting and Data Cross-Verification**: Work was completed on tickets related to peer DSS formatting and verifying the interconnection between client and vendor data, with statuses updated in Jira.
- **FTG Connect Client View Update**: Development has begun on adding a client view to the build processing section. An API call to fetch the client list has been created, and a dropdown to switch between client and vendor views has been integrated. Some debugging of the API is currently underway.

## **Deployment Processes and Environment Access**

A detailed discussion clarified the deployment pipeline for the FTG Connect application and highlighted a current access limitation.

- **FTG Connect Deployment Pipeline**: The confirmed process involves deploying first to a development environment, then to a "producer" stage, and finally to a preview server on "digi" for testing before production. This clarity helps in planning future releases.
- **VPN Access Limitation**: A blocker was identified in accessing the lower test environments (like dev and test producer) due to a mandatory VPN on the corporate laptop that cannot be disabled. This prevents immediate verification of the deployed changes, requiring an exception or alternative access method to be arranged.

## **Team Coordination and Upcoming Priorities**

The conversation outlined immediate next steps and shifted focus toward future project work, accounting for current team availability.

- **Production Deployment Responsibility**: Given the access constraints and existing context, it was decided that another team member would handle the deployment of the completed FTG Connect changes to production for the time being.
- **Team Availability**: It was noted that a key team member was out sick but available for emergencies, slightly impacting the day's collaboration dynamics.
- **Transition to DSS Application Work**: Once the current batch of FTG Connect "pain point" tickets is completed and deployed, the development focus will shift back to the DSS application. The goal is to address usability issues that the team encounters when handling tickets within DSS, which is anticipated to be a substantial next phase of work.

## **Immediate Follow-ups and Additional Tasks**

Several specific action items and minor tasks were surfaced to be handled immediately after the meeting.

- **Jira Ticket Updates**: All developers confirmed they had updated the status of their respective tickets in Jira, moving them to testing with appropriate comments, which allows for proper tracking and handoff.
- **Attachment for Ticket DS45**: An issue was raised regarding a video upload in a Constellation account (ticket DS45). The relevant video file was downloaded during the call to be attached to the corresponding Jira ticket, which should help in diagnosing the problem.
- **Future Task Assignment**: While more FTG Connect changes are desired, new tasks for the following day will primarily be assigned from the existing backlog in Jira. Communication for any additional details or issues will be facilitated through Slack for efficiency.
UBM Errors Check-in
## **Billing Account Attributes Update Process**

This section covers the core method for updating customer login credentials within the system. Updates are managed through a bulk file upload process. A specific file, often received from users like Rachel Ramp and others, is used to update the `login` and `login password` attributes associated with billing accounts. The system has a dedicated bulk upload functionality within the data admin section. The process uses a file where each row contains a unique **billing account ID**; the system then updates only the attributes for the accounts listed in that file (see the sketch at the end of this summary). After an upload, it is standard practice to manually refresh the connected Power BI report data to verify the changes have been processed correctly, even though the report is set to update automatically twice daily.

## **Automation and System Syncing Discussions**

The conversation explored opportunities and challenges related to automating the update flow and maintaining data consistency across different platforms. There is a strong interest in automating the manual file-based process, especially after a clarifying discussion with a colleague named Ray, which indicated the automation could be straightforward. A significant concern was raised about potential data silos and lack of synchronization. Specifically, Customer Success Managers (CSMs) might be updating credentials directly in a **Smartsheet** without those changes being reflected in the primary system (UBM) or communicated to the operations or data services teams. This creates a risk of discrepancies and errors being discovered too late. A key next step is to investigate and establish a unified process for handling credential updates across all touchpoints.

## **OPS FBAI Email Group Function**

The purpose and current use of a specific email distribution list within the system's ecosystem were clarified. The `OPS FBAI` email group has a very limited role within the primary application. Its main function is to serve as a recipient for automated system notifications, primarily related to payment processing errors, such as when a bill is excluded from a payment file. The group essentially acts as an alert mechanism for the operations team regarding payment issues.

## **Short-Term Fixes and Operational Priorities**

Updates were given on immediate tasks and pressing operational concerns that require attention. Files related to short-term fixes were prepared for distribution to the relevant team. An urgent operational concern was highlighted regarding **unmapped virtual accounts**, which is considered a high-priority issue that needs resolution. The team is awaiting the return of a colleague to help address the backlog of fixes and provide some breathing room for the team.

## **Meeting Logistics and Scheduling**

A minor adjustment was made to the schedule for an upcoming discussion. A meeting concerning **funding analysis** was rescheduled to start a half-hour earlier than originally planned to accommodate participants' availability.
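A minimal sketch of the row-keyed bulk update described above, assuming a CSV layout; the column names (`billing_account_id`, `login`, `login_password`) and the `update_attributes` call are hypothetical stand-ins for the data admin section's actual upload handler.

```python
import csv

# Hypothetical update API -- the notes only say each row carries a unique
# billing account ID plus the attribute values to change.
def update_attributes(billing_account_id: str, attrs: dict[str, str]) -> None:
    """Stand-in for the data-admin bulk-update call."""
    print(f"update {billing_account_id}: {attrs}")

def apply_bulk_file(path: str) -> int:
    """Update only the accounts listed in the file, one row per account."""
    updated = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            update_attributes(
                row["billing_account_id"],
                {"login": row["login"], "login_password": row["login_password"]},
            )
            updated += 1
    return updated

# After a run, the connected Power BI report is refreshed manually to verify
# the changes, even though it also updates automatically twice daily.
```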
[EXTERNAL]HEXPOL Constellation URA Data Updates
## Customer

The customer operates within a sector that requires precise tracking and reporting of utility consumption, specifically for commodities like natural gas, propane, and water. Their background involves managing a multi-site portfolio where accurate data is critical for internal reporting, compliance, and likely emissions auditing. The discussion reveals a deep operational need for reliable data to support environmental, social, and governance (ESG) or similar regulatory narratives for senior leadership.

## Success

The primary success identified was the proactive and thorough identification of data discrepancies through a detailed audit exercise. The process successfully pinpointed specific inaccuracies across several commodities, transforming what was previously assumed to be correct data into a far more accurate and auditable dataset. This exercise is considered a necessary and valuable step, ultimately leading to a higher degree of confidence in the reported figures. The collaborative effort resolved several issues, including incorrect manual entries and misinterpretations of billing structures, thereby strengthening the integrity of the foundational data.

## Challenge

The most significant and recurring challenge centered on **propane data management**. The core issue was the lack of a standardized conversion method across multiple vendors. Each propane supplier uses different definitions for cylinder sizes and weights, causing major inconsistencies when the data was captured using a single, uniform conversion logic. This vendor-specific variability directly impacted the accuracy of consumption reports. A secondary, though less pervasive, challenge involved reconciling historical bills where usage data was missing, requiring manual retrieval or estimation to complete the dataset.

## Goals

The customer's goals, as reflected in the conversation, are clear and focused on data integrity and reliable reporting:

- To achieve and maintain **100% data accuracy** across all utility commodities (natural gas, propane, water) for all operational sites.
- To establish a **robust and auditable data foundation** that can withstand internal and external scrutiny, particularly for emissions reporting.
- To **eliminate future variances** in reported figures by implementing lasting fixes, such as vendor-specific conversion logic for propane.
- To enable clear and confident **communication with senior leadership** by having a verified, accurate dataset and a clear narrative for any historical corrections.
Constellation bills
## Summary

The meeting primarily focused on resolving critical operational and security issues related to the procurement and handling of Constellation energy invoices.

### Constellation Invoice Automation Breakdown

The core issue is the failure of an automated system for dropping Constellation bills into the company's workflow. A system that once functioned automatically for certain electric accounts has stopped working, leading to manual intervention and delays in processing invoices for customers like Park Ohio. The root cause of why the automation stopped, and which specific accounts were ever covered, remains unclear, highlighting a significant knowledge gap within the team. The failure of this automation is causing ongoing problems, requiring manual checks and creating a risk of late payments for clients.

### Customer Tagging and System Confusion

A related problem involves a legacy "tagging" system for Constellation customers, which was believed to facilitate the automated process. This system appears to be poorly documented and misunderstood. Key personnel who were previously involved are no longer available or cannot recall how the tagging functioned, and the system's logic may have been broken by corporate actions like customer name changes (e.g., Park Ohio). The team is left with no clear understanding of how to identify which accounts should be automatically processed versus which require manual pulling.

### Critical Security Incident and Credential Management

A severe security incident was disclosed where internal login credentials for a master account containing *all* Constellation customer data were emailed to an external party. This account must be immediately secured. The priority is two-fold: first, to ensure these high-risk credentials are completely removed from any system where they could be synchronized or exposed (specifically UBM), and second, to change the account password. The incident elevates the issue from a potential internal vulnerability to an active data breach risk with serious legal and regulatory implications if the credentials are misused.

### Process Gaps for New Customer Accounts

Beyond fixing the broken automation for existing accounts, the team identified a major procedural gap: there is no established process to enroll *new* Constellation customer accounts for automatic invoice downloading. This gap means that as the company onboards new clients, their invoices are likely to fall through the cracks from the start, perpetuating the manual work problem. The lack of a handoff process to the operations team (Galo) for new accounts is a systemic failure that needs to be addressed to prevent future backlogs.

### Temporary Manual Solutions and Next Steps

To mitigate the immediate business impact, especially for the at-risk Park Ohio account, the team proposed a temporary manual workaround: having the BDE team manually pull Constellation invoices on a daily basis to prevent late payments. However, this is explicitly viewed as a stopgap measure. The long-term solution requires coordinated action: consulting with Mary upon her return for insights from a previous operations meeting, determining the feasibility of restarting or replacing the old automated system, and establishing a robust, documented process for both existing and new customer accounts to eliminate manual dependency and secure sensitive data.
Weekly Priority Meeting
## Summary

The meeting centered on evaluating two core strategic options for a database migration project, with a particular focus on security exceptions, customer impact, and the required timeline for implementation.

### Core Dilemma: Database Placement and Access Model

A fundamental decision point was identified between two paths for the ongoing migration. The first option involves proceeding with the current plan to place the database in the "red" environment while carrying forward security exceptions for direct customer access. The second, more complex option entails removing those exceptions by moving the database to the more secure "green" environment and implementing an API layer for all customer data access, which would necessitate changes on the customer side as well.

### Evaluating the "Green" Option with API Implementation

The primary uncertainty requiring assessment was the effort and timeline impact of shifting to a full API-based model. It was clarified that simply moving the database to green is technically straightforward, but the significant work lies in building the necessary APIs to replace direct database access. This shift would enable external customers to run their custom reports and access filtered data through these APIs instead of directly querying database views (a minimal sketch follows this summary).

### Security and Timeline Implications

Maintaining the current plan with security exceptions raised concerns, particularly with leadership, as an August timeline that still carried these exceptions was seen as surprising and potentially unacceptable. The discussion highlighted the need to fully articulate the risks and trade-offs of both options to secure a definitive strategic direction from senior leadership.

### Parallel Development and Path Dependency

An intermediate suggestion was made to allow certain development tasks to proceed in parallel within the red environment, as they are not data-dependent. This approach was proposed to maintain momentum while the broader strategic decision is being resolved, suggesting that some work could continue regardless of the final chosen path for database access.

### Required Leadership Decision and Communication

A strong consensus emerged that the team must prepare a clear articulation of both options for senior leadership, including the remediation activities, effort, and timeline implications of removing the database exposure exception. The decision was deemed significant enough to require explicit direction from key leaders before the project can move forward confidently, framing it as a key strategic decision for the program.
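For illustration, the sketch below shows the shape of such an API layer: a filtered endpoint standing in front of what customers previously queried as a database view. The framework choice (FastAPI), the endpoint path, and the view and column names are all assumptions; the meeting did not discuss implementation details.

```python
import sqlite3
from fastapi import FastAPI, Query

app = FastAPI()

@app.get("/reports/usage")
def usage_report(site_id: str = Query(...), year: int = Query(...)) -> list[dict]:
    """Replaces a direct customer SELECT against a database view: the API
    applies the filters server-side and returns only the permitted rows."""
    conn = sqlite3.connect("green.db")  # hypothetical database in "green"
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT month, usage, cost FROM usage_view WHERE site_id = ? AND year = ?",
        (site_id, year),
    ).fetchall()
    conn.close()
    return [dict(r) for r in rows]
```

In a real deployment this endpoint would also carry authentication and per-customer authorization, which is precisely the exception-removal work the timeline question hinges on.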
Daily Priorities
## **Summary**

The meeting centered on critical operational challenges within a bill payment or invoice processing system, primarily focusing on persistent data quality issues, customer-specific problems, and overwhelming workload management. The conversation revealed a system under significant strain, with urgent fixes needed to prevent customer escalations and ensure accurate financial reporting. Key pain points included fundamentally broken data for certain utilities, an epidemic of duplicate invoices, and specific high-risk accounts requiring immediate intervention.

### **Critical Data Integrity Failures: Propane Billing**

The system's handling of propane (erroneously referred to as natural gas) billing is fundamentally broken, corrupting all related reporting. This failure stems from inconsistent vendor measurements and flawed unit conversions.

- **Core Breakdown:** The process for capturing and converting propane usage data is defective. Vendors use different definitions for measurement units (e.g., "cylinder"), and the system's single, standardized conversion to gallons is incorrect, making the data unusable for analysis.
- **Impact on Reporting:** This error is not minor; it "messes up the entire reporting" for these bills. The analysis generated from this flawed data is deemed not useful for customers, indicating a severe quality issue that undermines the service's value proposition.

### **Epidemic of Duplicate Invoices**

A significant portion of daily operations is consumed by identifying and deleting duplicate bills, a problem that is rampant and worsening.

- **Volume and Impact:** Estimates suggest at least 10% of daily invoices are duplicates, with thousands of bills deleted over a three-month period. This creates massive inefficiency, as team members spend considerable time manually identifying and removing these duplicates instead of performing value-added work.
- **Root Causes:** Several factors contribute to this problem. A primary issue is incorrect account number matching during invoice ingestion, which prevents the system from automatically flagging duplicates. Other sources include receiving both paper and electronic (portal) copies for the same service, and potential issues with download processes or customers emailing invoices redundantly.
- **System Limitations:** Current duplicate detection logic is insufficient. It relies on exact matches of vendor, customer, amount, and service dates, or pixel-perfect PDF comparison, which fails for scanned versus downloaded documents or bills where the invoice date changes upon download (see the sketch at the end of this summary).

### **High-Risk Customer Accounts and Escalations**

Specific customers, notably Jacob and Victra, are at a crisis point, with systemic neglect threatening major business relationships.

- **Jacob Account in Peril:** The Jacob account is described as a looming "nightmare," receiving no dedicated attention. Their services are being "shut off daily," and the account is a priority due to the imminent risk of a severe customer escalation, similar to past issues with Victra.
- **Victra's Ongoing Demands:** Victra remains a "never ending story," consuming enormous resources. The team is bracing for further complaints despite being current on payments, as past issues like emergency payments are now being questioned. A recent tense meeting highlighted customer frustration with internal presentation materials, indicating strained relations.
- **Selective Intervention Strategy:** A strategy of isolating certain complaining customers (like Sheets) from automated fixing scripts is in place to avoid constant back-and-forth, while keeping a close manual watch on others known to be vocal (like Royal Farms).

### **Team Workload and Process Bottlenecks**

The team is operating under extreme pressure, manually triaging a massive backlog of errors with insufficient resources.

- **Manual Triage Overload:** The core activity involves reviewing hundreds of invoices line-by-line to correct errors, a process described as utterly blocking progress on system improvements. The sheer volume, such as 200+ new items daily for one customer (Extension), overwhelms capacity.
- **Contractor Allocation:** Contractors have been strategically reassigned to focus solely on the high-volume Extension list to make a dent in the backlog, though this pulls them from other critical accounts.
- **Internal and External Pressure:** The team faces "screaming" from customers and internal scrutiny regarding processing times and accrual reports, especially near the end of the fiscal year. There is a palpable fear of metrics being pulled to audit individual productivity.

### **Systemic Limitations and Uncontrollable Factors**

The discussion acknowledged some issues that are inherent to vendor systems and likely unsolvable.

- **Unfixable Vendor Quirks:** Certain problems, like some utility portals generating a new invoice date equal to the download date (while the due date remains static), are deemed uncontrollable system issues that the team must simply accept and work around.
- **Portal Integration Gaps:** The inability to reliably turn off paper bills when a portal feed is active leads to duplicate ingestion. A known workaround (using a special placeholder number for deleted bills) is likely not being implemented for new customers, exacerbating the duplicate problem.
- **Data Entry Dilemmas:** The process is hampered by complex, one-off scenarios that require manual intervention and judgment calls, such as invoices appearing under incorrect vendors or decisions on whether to process older, superseded bills.
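One way to loosen the exact-match logic criticized above is to key duplicates on the fields that survive re-download and scanning, deliberately ignoring the unstable invoice date. The sketch below is a minimal illustration with invented field names, not the system's actual detection logic.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Invoice:
    vendor: str
    customer: str
    amount: float
    service_start: str  # ISO dates, e.g. "2025-01-01"
    service_end: str
    invoice_date: str   # unreliable: some portals reset it to the download date

def dedupe_key(inv: Invoice) -> tuple:
    # Key on stable fields only; round the amount to cents so scanned vs
    # downloaded copies of the same bill still collide.
    return (inv.vendor, inv.customer, round(inv.amount, 2),
            inv.service_start, inv.service_end)

def find_duplicates(invoices: list[Invoice]) -> list[Invoice]:
    seen: set[tuple] = set()
    dupes = []
    for inv in invoices:
        key = dedupe_key(inv)
        if key in seen:
            dupes.append(inv)  # flag for manual review rather than auto-delete
        else:
            seen.add(key)
    return dupes
```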
Bill Templates Review
## Summary

The meeting was a detailed technical review of multiple utility billing data issues, focusing on discrepancies in consumption reporting, account mappings, and unit conversions. The primary goal was to audit and understand the root causes of errors flagged in various utility accounts, ranging from natural gas and water to propane and electricity.

### Duplicate Accounts and Data Mapping Issues

Several accounts were identified with problematic data structures that led to reporting inaccuracies. A recurring issue was duplicate account entries and incorrect mappings between virtual accounts and physical locations, which caused consumption data to be split or misattributed.

- **Duplicate Entries and Unlinked Bills:** For one natural gas account, bills prior to a specific date were not correctly linked, creating a mapping issue. While this did not immediately cause missing usage data (as consumption across virtual accounts still summed to the correct location total), it was noted as a problem requiring a fix to ensure accurate per-account reporting.
- **Split Build Blocks Overlapping:** In some cases, a single virtual account contained multiple "build blocks" with overlapping service periods for the same type of charge (e.g., supply charges). This resulted in the system capturing the same data in two separate blocks, which could appear as a discrepancy to end-users reviewing the raw data, even though the total consumption for the location remained correct after aggregation.

### Incorrect Service Dates and Billing Periods

A number of bills were found to have service dates that were clearly erroneous, directly impacting the accuracy of period-based reporting.

- **Invalid Date Ranges:** Examples were reviewed where the service period on a bill was listed as spanning two months, which is impossible for a standard monthly bill. This was identified as a data ingestion error where the start date was incorrect. The fix required manually updating the start date within each affected build block to reflect the accurate, shorter billing cycle.

### Challenges with Propane Bill Conversions

A significant portion of the discussion centered on the complexities of handling propane deliveries from different vendors, which use varied units of measurement that the current system cannot process accurately.

- **Multiple Vendor Unit Definitions:** The core issue is that three different propane vendors each use distinct definitions for their delivery units (e.g., cylinders, pounds). The current system lacks the ability to apply multiple conversion rates for the same utility type (propane) to standardize these deliveries into a common unit like gallons.
- **Proposed System Change:** A practical solution was proposed: adding support for "pounds" as a standardizable unit within the system. This change would require a software release but would allow for a single, vendor-specific conversion rate from pounds to gallons, simplifying the process compared to managing multiple gallon-conversion rules (see the sketch at the end of this summary). This was highlighted as a necessary fix, though it requires coordination to implement.

### Discrepancies in Historical Bill Data

The review uncovered several anomalies in historical utility data that had been manually generated or imported from client files.

- **Imported Data Inconsistencies:** For certain accounts, historical bills were created by the data services team based on files received from the client (e.g., Hexpol).
  These bills sometimes contained oddities, such as energy charges without corresponding consumption data or usage values that appeared "way off." It was clarified that these were known historical entries and not representative of ongoing, current billing issues.
- **Questions on "Converted" Bills:** Confusion arose around the term "converted bill" in one audit note. It was determined this likely referred to these historical bills and not to any real-time unit conversion process within the system, as no such conversion was being applied to the bills in question.

### Electricity Consumption and Charge Calculations

The audit also covered specific electric bills to verify that consumption totals and peak/off-peak calculations were being captured correctly.

- **Total Consumption Verification:** For one electricity account, the total consumption for the month was found by summing the on-peak, mid-peak, and off-peak usage values from the distribution section of the bill. The system's calculated total was confirmed to be accurate, even though the method of calculation (involving subtraction in some cases) was complex and not immediately intuitive from the bill's layout.
- **Charge Accuracy:** In another instance, a complaint about "charges being off" was investigated. The discrepancy was traced to how the system presents data: certain line items like "past due amounts" are recorded at the overall bill level, not within individual build blocks. Therefore, while the total amount due was correct, a breakdown at the build block level would not include these items, creating a perceived mismatch.

### General Data Quality and Reporting Impacts

Throughout the review, broader themes emerged regarding how data quirks affect end-user reports and the importance of correct data structuring.

- **Impact of Duplicate Bills:** The handling of true duplicate bills (e.g., a re-issued bill with late fees) was discussed. The system flags these with a duplicate bill error, preventing the second instance from being processed into the monthly usage report, which is the correct behavior to avoid double-counting consumption.
- **Understanding "Total Bulk" and Other Units:** Several bills contained ambiguous units of measure like "Total Bulk" or "EA" (likely "each"), which were not clearly defined and posed a challenge for accurate categorization and conversion. These were marked as items requiring domain expertise to clarify.
- **Correct Values Despite Split Data:** A key takeaway was that many flagged issues, particularly multiple build blocks on a single account, do not negatively impact the final aggregated consumption totals reported to the client. The values are often correct; the irregularity lies in the underlying data structure.
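The proposed change lends itself to a small sketch: standardize every propane delivery on pounds, then apply one uniform pounds-to-gallons step, with only the cylinder-to-pounds mapping varying by vendor. The vendor names and cylinder weights below are illustrative; ~0.236 gal/lb is a commonly used propane density figure and matches the rate cited in the Bill Error Validation notes that follow.

```python
# Uniform final step: pounds -> gallons (assumed standard propane density).
GALLONS_PER_POUND = 0.236

# Each vendor defines "cylinder" differently, so only the conversion to
# pounds is vendor-specific. Entries here are invented for illustration.
CYLINDER_POUNDS = {
    "vendor_a": 32.0,
    "vendor_b": 33.5,
}

def delivery_to_gallons(vendor: str, quantity: float, unit: str) -> float:
    """Normalize a propane delivery to gallons via pounds."""
    if unit == "pounds":
        pounds = quantity
    elif unit == "cylinders":
        pounds = quantity * CYLINDER_POUNDS[vendor]
    else:
        raise ValueError(f"unsupported unit: {unit}")
    return pounds * GALLONS_PER_POUND

print(delivery_to_gallons("vendor_a", 4, "cylinders"))  # ~30.2 gallons
```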
Bill Error Validation
## Propane Billing and Unit Conversion Issues

A critical discrepancy was identified in how propane consumption is captured and calculated. The system currently records usage in "cylinders," but the underlying bills show the actual measurement in pounds. The core problem is that the application does not support pounds as a unit of measure for propane, only gallons, therms, and cylinders.

- **Unsupported Unit of Measure:** Bills showing propane usage in pounds trigger an "unsupported unit" error because the system lacks a built-in conversion from pounds to gallons.
- **Inconsistent Cylinder Definitions:** The existing conversion rate in the system assumes one cylinder equals approximately 4.014 gallons. However, this is problematic because cylinder sizes (e.g., 32 lbs vs. 33.5 lbs) vary between vendors, making a single conversion rate inaccurate.
- **Significant Calculation Variance:** Using the system's cylinder-to-gallon conversion yields a calculated usage of roughly 16 gallons for a sample bill. In contrast, a direct pounds-to-gallons conversion (using a rate of 0.236 gallons per pound of propane) results in approximately 30 gallons, nearly double the amount (worked through in the sketch at the end of this summary). This major discrepancy must be resolved for audit accuracy.

## Natural Gas Bill Analysis and MCF Conversions

Several natural gas bills were reviewed where auditors flagged potential issues, primarily stemming from unit of measure conversions that are performed automatically by the system.

- **Automatic Conversion to Therms:** The system's default unit for natural gas is **therms**. When bills are captured in **MCF (thousand cubic feet)**, the system automatically converts the usage using a standard factor (e.g., 10.37 therms/MCF). This converted value is what appears in reports.
- **Auditor Confusion:** The flagged "issues" often were not errors but misunderstandings. Auditors comparing the raw MCF value on the bill to the therm value in the system reports saw a mismatch, not realizing the conversion had taken place. Manual checks confirmed the underlying calculations were correct.
- **Bill Structure Verification:** For bills containing both supply and distribution charges, it was confirmed that the consumption (usage) should typically match between the two segments, as the same volume of gas is being supplied and then delivered. This was validated in the reviewed bills.

## Bill Structure and Data Capture Problems

The review uncovered inconsistencies in how complex bills are structured within the system, leading to potential reporting inaccuracies.

- **Mislabeled Commodity Types:** A bill clearly containing water usage data was incorrectly labeled as "Lightning" in the system, based on a meter ID or source file error. This misclassification would cause the consumption and charges to appear in the wrong utility reports.
- **Incorrect Build Block Types:** One natural gas bill was set as "distribution only" but contained a "supply" observation type within it. This bill should likely be classified as "full service" to accurately reflect its contents.
- **Multiple Build Blocks on Single Bills:** Some bills were captured with multiple build blocks (e.g., Water, Stormwater, Cash Flow) as received from the source data feed. While all charges are captured, consumption metrics are only tied to the relevant commodity (e.g., Water), meaning stormwater charges would roll up into general water charge reports without separation.
## Data Inconsistencies and Review Process

Underlying data inconsistencies and the approach to the audit review were discussed.

- **Inconsistent Data Sources:** For some natural gas bills, different values from the same PDF (e.g., a "MCF Used" field vs. a "Monthly Usage" field) were captured in different months, leading to an inconsistent data series. It was unclear which value was the correct one to use.
- **High Volume of Similar Issues:** The propane cylinder conversion issue and the natural gas MCF/therm clarification are not isolated incidents but affect a large number of bills for the client.
- **Focus on False Positives:** The strategy for the audit review shifted toward identifying "false positives" (bills flagged by auditors that are, in fact, correct based on the system's designed conversions and capture logic) rather than attempting to re-engineer every bill.

## Path Forward for System Adjustments

Potential solutions to the most pressing issue, the propane unit conversion, were explored.

- **Proposed Hotfix:** A short-term solution was proposed to add **pounds** as a supported unit of measure for propane, along with a standardized pounds-to-gallons conversion factor. This would allow all existing bills captured in pounds to be repaired and calculated correctly without the unsupported unit error.
- **Challenge of Accurate Conversion:** Implementing this fix is contingent on determining a reliable, standardized conversion factor from pounds to gallons for propane, acknowledging that this conversion can vary slightly based on temperature and pressure.
- **Limitation of Current Logic:** The system cannot currently accommodate customer-specific or vendor-specific conversion rates (e.g., for different cylinder sizes). A universal rate must be applied, which is a known limitation affecting precision.
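The variance called out above can be checked with simple arithmetic. Assuming a sample bill of 4 cylinders and a 32 lb cylinder (the notes cite both 32 lb and 33.5 lb vendors), the two conversion paths give:

```python
cylinders = 4

via_cylinder_rate = cylinders * 4.014    # system's cylinder-to-gallon rate
via_pounds = cylinders * 32.0 * 0.236    # direct pounds-to-gallons rate

print(f"{via_cylinder_rate:.1f} gal vs {via_pounds:.1f} gal")
# -> 16.1 gal vs 30.2 gal: the near-2x discrepancy flagged in the audit
```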
Review Problem Bills
## Summary

### Urgent Audit Preparation and Data Crisis

The meeting centered on an urgent and critical data quality crisis for a customer facing an external audit by Ernst & Young. The audit, scheduled for the coming Monday, will scrutinize emissions data for fiscal years 2024 and 2025. The core problem is that the customer's utility bill data in the system is a "mess," with widespread inaccuracies in usage captures and bill classifications that threaten the audit's success and could have severe professional repercussions.

- **Critical Timeline and Stakes:** The team is under immense pressure to rectify data before the audit. The customer is manually reviewing every single bill to identify errors, and the platform team must correct them. There is a palpable fear that the scale of inaccuracies could lead to significant consequences, including job losses for individuals involved.
- **Root of the Problem:** The data issues are systemic and not new. They stem from historical problems with data ingestion and ongoing manual setup errors. The team acknowledges they have "no one else to blame" for the current state, as the platform is now fully responsible for the data's accuracy. This situation is seen as a potential risk for any customer undergoing a similar audit.

### Systemic Data and Process Issues

The discussion revealed deep-seated problems with how utility data is captured, classified, and reported within the platform. The issues prevent accurate emissions reporting and undermine customer trust.

- **Incorrect Usage and Classification:** A primary issue is the misclassification of vendor types (e.g., Full Service vs. Distribution/Supply) and incorrect assignment of observation types (service descriptions) for line items like "use" or "use cost." If these are wrong, usage data does not flow correctly into calculated values and, consequently, into the final audit reports.
- **Platform Reporting Gaps:** There is a concerning disconnect between what the data processing system (DSS) captures and what is visible and usable in the front-end platform (UBM). The team lacks a straightforward way to map a bill's internal ID to its corresponding record in the data processing system, making holistic analysis and troubleshooting difficult.
- **Lack of Proactive Monitoring:** The team admits to having no effective way to gauge the severity or scope of the problem proactively. They are consistently surprised by the magnitude of inaccuracies when a crisis like an audit arises, indicating a lack of ongoing data quality assurance.

### Strategy for Manual Review and Validation

Given the time constraints and systemic issues, the team formulated a strategy for a massive manual review of bills to identify and correct errors before the audit.

- **Prioritization by Impact:** The plan is to focus on the customer's 18 locations, prioritizing those with the highest utility usage volumes. The goal is to create a "heat map" of the most critical sites and months to maximize the impact of limited review time.
- **Manual Verification Process:** The review involves manually checking each bill's PDF against its digital record in the platform. The reviewer must validate two key elements: the correct vendor type (based on utility and deregulation rules) and the accurate capture of usage, ensuring it is not double-counted due to incorrect observation types.
- **Comparative Analysis:** A parallel approach involves extracting all usage data from the data processing system (DSS) to create a pivot table showing total usage by utility type and site. This high-level view can be compared against the platform's numbers and the raw bills to quickly identify major outliers and discrepancies.

### Key Learnings from Expert Training

A significant portion of the meeting was dedicated to a training session with an operations expert to understand the rules for correctly validating and classifying bills, which are riddled with exceptions.

- **Vendor Type Rules:** The determination of "Full Service" vs. "Distribution/Supply" depends on the utility type and state deregulation status. Generally, electricity and natural gas in deregulated states are split, while propane, water, and sewer are almost always Full Service. Bills from landlords or for reimbursements often complicate these rules and may require pragmatic exceptions.
- **Avoiding Double-Counting Usage:** A critical lesson was identifying "counted usages." Line items labeled **"use"** or **"use cost"** are counted in totals. If a bill has both a total "use" line and detailed line items also labeled as "use," the usage will be double-counted. The correct approach is to capture the total usage only once, with other line items marked as non-counted variants like "adjustment use cost" (illustrated in the sketch at the end of this summary).
- **Unit of Measure Challenges:** Correctly interpreting the unit of measure on bills is a common hurdle, with examples like propane billed in "cylinders" requiring conversion to a standard weight (pounds) based on vendor-specific information.
- **Build Block Structure:** Bills should not have separate build blocks for singular charges like late fees or taxes. These should be consolidated into the main build block for the relevant commodity (e.g., gas tax added to the gas build block). Creating extra blocks causes overpayment and reporting inaccuracies.

### Next Steps and Long-Term Concerns

The meeting concluded with a plan for immediate action and reflection on the broader, unsustainable nature of the current process.

- **Immediate Action Plan:** The immediate focus is on executing the manual review through the night and next day, tackling as many high-priority bills as possible. A follow-up meeting with the customer is scheduled to discuss findings and the path forward for the audit.
- **Acknowledging Incomplete Solutions:** The team is pessimistic about having a complete, pristine data set by the audit deadline. The goal is to gain a better understanding of the problem's scope and provide a defensible, if imperfect, data story to the auditors.
- **Recurring Problem with No Systemic Fix:** A major concern is the lack of a long-term, scalable solution. The current approach of manual review and correction is not sustainable. Every customer undergoing an audit is likely to face similar issues, revealing a fundamental weakness in the platform's data integrity and validation processes. The team expressed frustration at the repeated cycle of discovering massive data problems only during high-pressure crises.
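The double-counting rule above reduces to a one-line filter. The sketch below uses the observation-type labels from the training ("use", "use cost", "adjustment use cost"); the data shape is invented for illustration.

```python
# Only line items whose observation type is "use" or "use cost" contribute
# to the usage total; detail lines must carry a non-counted variant so the
# bill's total is captured exactly once.
COUNTED_TYPES = {"use", "use cost"}

def total_usage(line_items: list[dict]) -> float:
    return sum(i["usage"] for i in line_items if i["type"] in COUNTED_TYPES)

bill = [
    {"type": "use", "usage": 1200.0},                  # the bill's total usage
    {"type": "adjustment use cost", "usage": 1200.0},  # detail, not re-counted
]
print(total_usage(bill))  # 1200.0, not 2400.0
```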
[EXTERNAL]FW: [EXTERNAL]Updated invitation: Constellation + Kong - Technical Deep Dive @ Thu Dec 4, 2025 2pm - 3pm (CST) (sanwar.sunny@constellation.com)
## Prospect

The prospect represents a traditional energy company, Constellation, which is embarking on a significant digital transformation initiative. While their core business is power and gas, they are now building internal software capabilities and are in the process of evaluating and consolidating their technology stack. The individual involved is focused on establishing a centralized, standardized API management platform to govern and synchronize multiple disparate applications currently developed in silos. Their background indicates a need to move from a fragmented, ad-hoc software development model to a governed, enterprise-ready platform that can support both current API needs and future AI-driven use cases.

## Company

Constellation is a power and gas company with historically limited in-house software development experience. The company is currently managing three separate applications hosted across different cloud providers: Google Cloud Platform, Microsoft Azure, and Amazon Web Services. Each application has its own independent development team, roadmap, and deployment environment, leading to a lack of standardization and governance. To address this, the company is actively developing "Navigator," a strategic, centralized platform aimed at unifying API management, security, and developer workflows across the entire organization. This initiative is part of a broader effort to mature their software engineering practices over the next few years.

## Priorities

The company's key priorities for the API management platform are comprehensive and driven by both immediate consolidation needs and future strategic goals:

- **Centralized Governance & Standardization:** A primary objective is to bring the three siloed applications under a single management plane. This includes establishing consistent security policies, access controls, and deployment workflows across all teams and cloud environments.
- **Security and Compliance:** Implementing robust authentication and authorization is critical. This includes support for various identity providers (IdPs), role-based access control (RBAC) for both internal developers and external customers, and meeting compliance requirements such as SOC 2 and NIST standards.
- **Full Lifecycle API Management:** The platform must support the entire API lifecycle, from design and development to publishing, versioning, and retirement. This necessitates a developer portal for API discovery and documentation, both for internal teams and external consumers.
- **Advanced Traffic Management and Reliability:** Requirements include sophisticated rate limiting, throttling, and service protection mechanisms to safeguard backend services from abuse and ensure reliability for critical consumers. Support for various load-balancing algorithms is also noted.
- **Observability and Analytics:** Gaining deep visibility into API usage, performance, and errors is essential. This involves built-in analytics dashboards, the ability to create custom reports, and integration capabilities with external monitoring tools like Datadog or Splunk.
- **Monetization Capabilities:** There is a clear need to productize APIs and enable monetization. This includes defining API products with feature-based plans, metering usage, and integrating with billing systems like Stripe to handle invoicing and overage charges.
- **Support for AI and Agentic Workflows:** Forward-looking priorities include preparing for AI integration, specifically managing traffic for large language models (LLMs) and supporting the Model Context Protocol (MCP). This encompasses capabilities like semantic caching to control costs and providing governance for AI agent interactions.
- **Operational Flexibility and Performance:** The solution must offer deployment flexibility, including hybrid models where the data planes reside within the company's own cloud VPCs for security. It must also deliver high performance, scalability, and low latency, capable of handling significant transaction volumes.
- **Accelerated Timeline:** There is an ambitious goal to have at least two of the three applications running on the new platform and be production-ready by the first quarter of the next calendar year.
[EXTERNAL]Constellation HEXPOL URA 2025 Data Review
## Summary

### The Urgent Situation and High-Stakes Audit

A critical data accuracy crisis has emerged, requiring immediate triage to prepare for a high-stakes, third-party audit by Ernst & Young scheduled for the upcoming Monday. The audit covers all 2025 data for a wide range of commodities, including natural gas and propane, encompassing usage, cost, and emissions reporting. The gravity of the situation is severe, as inaccurate data could lead to significant professional, legal, and financial repercussions, including job loss and lawsuits. The entire team has canceled other meetings to dedicate full focus to resolving these discrepancies before the deadline.

### Discovery of Widespread Data Discrepancies

An intensive manual review of 2025 invoices has uncovered widespread and inconsistent inaccuracies across multiple commodity types, far exceeding initial expectations. A dedicated team has been comparing every invoice against the system data to flag mismatches in usage or cost.

- **Propane Data is Systemically Incorrect:** The propane data is particularly problematic, with most entries showing only half of the actual usage. This stems from an incorrect or inconsistently applied conversion formula for 33-pound cylinders (7.78 gallons per cylinder), affecting the accuracy of both usage and downstream emissions calculations.
- **Errors are Random and Inconsistent:** The inaccuracies are not uniform; they appear randomly across accounts and invoices, making them difficult to predict and correct without a line-by-line review.
- **Scope Extends Beyond Initial Findings:** While initial concerns were around natural gas, the review has expanded to reveal issues with "propane and other areas," indicating a broader data integrity problem.

### Identification of Root Causes

The investigation points to two primary, concurrent root causes for the erroneous data, one related to system configuration and the other to data entry processes.

- **Incorrect Account Setup:** A significant issue originated in April when certain service locations were configured with the wrong "service type" during their initial setup. This single configuration error has caused all subsequent data for those locations to be reported incorrectly for the entire year, but fixing the root setup will propagate corrections forward.
- **Faulty Data Entry and AI Processing:** Inaccuracies have also been introduced through manual data entry errors. Furthermore, the Artificial Intelligence (AI) tool used to extract data from invoice PDFs has been pulling incorrect fields inconsistently, sometimes interpreting the same bill differently on separate occasions, which compromises reliability.

### Triage Strategy and Collaborative Effort

The team is executing a coordinated, short-term fix to cleanse the 2025 data for the audit, while acknowledging the need for long-term system improvements.

- **Prioritizing a Practical, Rapid Fix:** The immediate goal is to produce accurate *annual* totals for 2025 to satisfy the auditor. A pragmatic approach is to correct a representative month, establish a correction "delta," and apply it to the year, rather than perfectly rectifying every single monthly entry under extreme time pressure (see the sketch at the end of this summary).
- **Division of Labor for Efficiency:** Work is divided between teams: one group is exhaustively auditing invoices to create a definitive list of erroneous Bill IDs, while the technical team uses that list to implement fixes in the system.
  The intent is to front-load corrections, with completed audit lists being handed off for repair as soon as possible during the day.
- **Verification and Communication Plan:** A short checkpoint meeting is tentatively scheduled for 2:30 PM Eastern the following day. The preference is to cancel this meeting via email confirmation once all fixes are verified, signaling that the data is ready for the final audit report compilation.

### Path to Resolution and Future Safeguards

The plan outlines clear steps for immediate remediation and introduces controls to prevent recurrence, addressing concerns about being an untested environment for new technology.

- **Immediate Correction Protocol:** Corrections will be made directly to individual bills based on the audit list. System configuration errors for specific accounts will be corrected at the source, which will automatically rectify all associated data. The fixes reportedly take effect in the system in real time, without requiring an overnight refresh.
- **Implementing Long-Term Data Integrity Measures:** For the future, two key safeguards are planned: the implementation of dual data validation checks to replace manual entry, and an enhanced AI review layer that flags discrepancies between the OCR output and the manually entered data for human verification.
- **Demand for Mature Technology Deployment:** A strong expectation was set that new features, particularly AI-driven data extraction, must undergo rigorous quality assurance and stability testing before being deployed in a live, production environment used for critical financial and compliance reporting. There is a firm refusal to continue serving as a testing ground for experimental technology.
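The representative-month "delta" approach can be sketched in a few lines. All quantities below are invented; only the method (derive a correction factor from one exhaustively audited month and scale the annual total) comes from the discussion.

```python
# One month is audited line-by-line; the ratio of actual to recorded usage
# becomes the correction factor applied to the annual total for the audit.
audited_month_system = 40.1   # gallons the system recorded for the month
audited_month_actual = 77.8   # gallons confirmed against the invoices

delta = audited_month_actual / audited_month_system  # correction factor

annual_system_total = 480.0
annual_corrected = annual_system_total * delta
print(f"delta {delta:.3f} -> corrected annual total {annual_corrected:.1f} gal")
```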
Simon Properties Onboarding Plan - Internal *** High Importance ***
## Summary

The meeting centered on planning and executing a large-scale utility account migration project for the client Simon, involving 8,823 accounts. The team is under significant time pressure, with a target Go-Live date of March 31, but faces major hurdles due to delayed project initiation and dependencies on third-party notifications.

### Project Scope and Current Status

**The project involves migrating a massive number of accounts on a compressed timeline, creating an immediate need for detailed planning and resource allocation.** The total account list provided by Simon is 8,823, which is higher than the initially expected ~7,800.

- **EDI vs. Non-EDI Breakdown:** Within the total, 2,457 accounts were originally flagged as potentially using EDI (Electronic Data Interchange). Initial processing has isolated 848 confirmed EDI accounts that lack standard invoice image links.
- **Work Completed:** A script has been run on the remaining 7,412 accounts to separate non-EDI bills for processing. So far, 10 test bills have been successfully mapped and processed in the platform as a proof of concept.
- **Timeline Pressure:** The contract was signed in July, but substantive work did not begin until November, putting the project behind schedule from the outset.

### Critical Path: Portal Setup and NG Notification

**The single biggest constraint on the project timeline is the inability to begin setting up vendor portals until formal notice is given to the incumbent provider, NG.**

- **Notice Period Uncertainty:** The team must clarify the contractual notice period with Simon's leadership, as there is confusion over whether it is 30, 60, or 90 days. A longer period is critical for feasibility.
- **Portal Setup Volume:** The team must complete portal setups and change of address (COA) processes for all 8,823 accounts. The work cannot begin in earnest until NG is notified, which will severely compress the active execution window leading up to the March 31 Go-Live.
- **EDI Account Strategy:** A subset of ~300 accounts are with Constellation and can be accessed via their CDP portal independently, which is a minor relief. However, all other EDI accounts will require portal setup and manual bill downloads once credentials are available.

### Resource Planning and Process Refinement

**Given the scale of the project, there is a pressing need to optimize resources and learn from past implementation challenges to avoid repeating mistakes.**

- **Resource Allocation:** Currently, only one team member (Alberto) is manually processing PDFs for data extraction, mapping, and UBM setup. A formal plan for additional resources from late January through March is required.
- **Vendor Requirement Database:** A major inefficiency identified from past projects is the lack of a centralized database detailing specific setup requirements for each utility vendor (e.g., unique LOA forms, notarization needs, required data fields like Tax ID or last bill amount). This leads to a reactive, unprofessional appearance with clients.
- **Immediate Action:** The team will hold a working session to populate a detailed spreadsheet for the top vendors, documenting known requirements (utility vs. client LOA, needed documents, etc.) based on lessons learned from previous clients like Ascension and Vector.
### Data Processing and System Configuration **Efficient data handling and upfront configuration are essential to avoid costly rework later in the project lifecycle.** - **Upfront Attribute Gathering:** It was emphasized that gathering all location attributes and General Ledger (GL) splits *before* mapping and processing bills is far more efficient. Doing this work later forces the team to touch each account multiple times. - **AP File Format:** The accounts payable (AP) file format for Simon will be a custom configuration, not a standard template. - **GL Strategy:** The plan is to mirror the GL allocation currently used in NG's system to simplify the transition. ### Next Steps and Client Communication **The immediate focus is on internal alignment to build a credible execution plan before Simon can proceed with notifying NG.** - **Internal Working Session:** A dedicated call will be scheduled for the operations team to detail vendor requirements for the top utilities, forming the basis of a realistic project plan. - **Client Timeline Review:** The visual timeline shared with Simon shows key milestones, with data extraction and portal setup as the longest phases. The next critical step is aligning internally and then informing Simon when they are comfortable to issue the notice to NG, which will trigger the portal setup phase. - **Upcoming Discussion:** A meeting with Simon's AP team is scheduled for the following Tuesday to review the test bills and discuss the implementation further.
Bill Template Review
## Summary ### Issues with the Current Bill Setup System and the Concept of a Bill Template The meeting centered on identifying and addressing fundamental flaws in the current legacy system's approach to handling bills. The core problem is the absence of a properly utilized "bill template," which is leading to multiple systemic issues. The current method is causing inaccuracies and inefficiencies in data capture and processing. - **Lack of Setup and Standardization**: The system lacks a mechanism to correctly set up bills within DSS (Data Submission System), which is a critical gap that needs to be addressed. - **Incorrect Logic Application**: Presently, specific logic (e.g., for determining observation types or usage charges) is being hard-coded or applied in an ad-hoc manner. This logic often already exists within the bill template for a given account, so re-implementing it is redundant and error-prone. - **Inconsistent Data Entry**: In DSS, fields that should be constrained (like usage types) are currently freeform, leading to incorrect data entry and making it difficult for operators to correct errors. This creates a frustrating and inefficient audit experience. ### The Proposed Solution: Integrating the Bill Template into DSS The primary solution discussed is to systematically incorporate the existing bill template into the DSS workflow. This integration would act as a crucial safeguard, ensuring data integrity and streamlining operations by leveraging predefined rules. - **Template as a Single Source of Truth**: The bill template, which is tied to a unique combination of customer, vendor, and account number, should dictate what data fields are applicable and what logic should be followed during data capture in DSS. This would eliminate guesswork and hard-coded rules. - **Cross-Platform Consistency**: While the immediate focus is DSS, the bill template is envisioned as a shared resource that should be consistent across DDS, DSS, and UBM systems to maintain data coherence throughout the platform. - **Future-Proofing for Setup Bills**: Incorporating the template is also a preparatory step for a future where setup bills could be created directly within DSS, making the process more comprehensive and controlled. ### Determining Bill Types: A Case Study in Flawed Logic A concrete example was examined to illustrate the problems with the current logic. The discussion focused on how the system determines a bill's type (e.g., full service, distribution, or supply) and the downstream errors this causes. - **Current Flawed Methodology**: The system currently determines bill type based on the service type listed on a bill, which can be incorrect or misleading, especially since a single bill can contain both distribution and supply components. - **The Correct Approach**: The bill type should be definitively known and defined within the account's bill template from the initial setup. Relying on the template would prevent misclassification. - **Example - Natural Gas Charges**: The logic that "wellhead use" is always applied for natural gas is incorrect. This should only apply if the bill template specifies a *supply* or *distribution* type. Using the template would automatically enforce this correct rule. ### Addressing the DSS Audit Queue and Data Routing The conversation shifted to immediate, tactical issues regarding bills stuck in the DSS audit queue and how to route them for resolution effectively.
- **Clearing the Audit Backlog**: There is a need to manually push invoices currently stuck in the DSS audit status over to the Data Audit system for operator review, similar to a process executed the previous week. - **Including Critical Error Context**: When routing these items to Data Audit, it is essential to include the specific error message or observation details in the operator notes field. This provides crucial context for the audit team to understand and fix the issue quickly, as this error information is not currently being passed through automatically. - **Scripting the Solution**: A script will be adjusted to facilitate this data move. While a more permanent, automated integration is the ideal long-term solution, the immediate fix involves updating the script to include the error details in the transfer. ### Next Steps and Implementation Planning The meeting concluded with a plan for executing the discussed strategies, balancing immediate fixes with longer-term architectural improvements. - **Immediate Action on Audits**: The first action is to run a script to transfer DSS audit items to Data Audit, ensuring all relevant error information is included in the operator notes for effective resolution. - **Investigating Template Integration**: Following the audit cleanup, the next major task is to investigate how to pull the bill template data from DDS and integrate it as a safeguard within the DSS data capture process. This requires understanding the technical access and methods for retrieving this template information. - **Preparatory Work for Reporting**: Work will begin on creating a report template. Once the template is designed, the development team can proceed with its implementation to improve visibility and tracking.
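As a concrete picture of the template-as-safeguard idea, the sketch below shows a bill template, keyed by customer, vendor, and account number, declaring the bill type and the usage types an operator may enter, so DSS can reject freeform values at capture time. The codes follow the meeting's examples; the structure and names are assumptions for illustration.

```python
from dataclasses import dataclass

# Sketch: the bill template dictates what data is applicable for an account,
# replacing hard-coded rules. All names and codes here are assumptions.

@dataclass(frozen=True)
class BillTemplate:
    bill_type: str                  # "supply", "distribution", or "full service"
    allowed_usage_types: frozenset

TEMPLATES = {
    ("acme", "Constellation", "1234-567-8"): BillTemplate(
        bill_type="full service",
        allowed_usage_types=frozenset({"use"}),
    ),
}

def validate_capture(customer: str, vendor: str, account: str, usage_type: str) -> None:
    """Refuse data the account's template does not allow, instead of guessing."""
    template = TEMPLATES.get((customer, vendor, account))
    if template is None:
        raise LookupError("No bill template set up for this account")
    if usage_type not in template.allowed_usage_types:
        raise ValueError(
            f"'{usage_type}' is not valid for a {template.bill_type} account"
        )

validate_capture("acme", "Constellation", "1234-567-8", "use")        # accepted
# validate_capture("acme", "Constellation", "1234-567-8", "wellhead") # raises
```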
Bill Template Review
## Summary ### Initial Technical Review of Data Batches The meeting opened with an immediate technical review concerning the visibility and count of items within data batches, revealing a discrepancy in displayed totals. - **Resolving Data Display Issues:** A primary issue was identified where the interface appeared to show only a limited number of items (e.g., 30). The resolution involved manually scrolling to the bottom of the list, which then displayed the full count, revealing batches of 50, 40, and 15 items. - **Verification of Batch Counts:** There was a specific focus on verifying the exact number of items in the batches, with initial counts of 17 or 16 items noted before confirming the final numbers through the scrolling action. ### Transition to Core Development Agenda Following the resolution of the immediate data view issue, the discussion intentionally pivoted to the central, planned topic for the session. - **Focusing on Systematic Problem-Solving:** A deliberate decision was made to shift focus onto tackling project challenges sequentially, one at a time, indicating a structured approach to development priorities. - **Introduction of the Bill Template Topic:** The main objective of the meeting was introduced: to discuss and define the specifications or requirements for a "bill template," which is evidently a key upcoming development task. ### Planning for Knowledge Transfer and Reference Significant emphasis was placed on documenting the forthcoming discussion for practical implementation and future reference. - **Intent to Record for Developer Reference:** Explicit plans were made to record the detailed conversation about the bill template. The intent is to provide a clear, actionable reference for the developer tasked with working on it, underscoring the importance of accurate knowledge transfer. - **Technical Coordination for Recording:** A brief logistical exchange occurred to establish the method of recording the call, confirming that the technical setup was a necessary precursor to diving into the substantive bill template discussion.
DSS-UBM Alignment
## Summary The meeting primarily focused on addressing critical data quality issues within a utility bill management system, particularly concerning natural gas consumption reporting for specific client locations. This led to a broader discussion about systemic improvements needed to prevent such issues in the future and the integration of different data platforms. ### Initial Project Context and API Considerations The conversation began by acknowledging ongoing Q1 planning and several technical challenges, including API development and defining a central source of truth for data, which set the stage for the deeper data quality discussion. - **Legacy System Issues and API Integration:** Concerns were raised about outdated data and inconsistencies between systems, specifically questioning the "central source" for bill data and the relevance of virtual accounts in a new "bill pay world." - **Resource Allocation for Technical Debt:** There was a strategic discussion about leveraging developer resources from a migration project to help with broader system-wide improvements, including "interpreter" issues on the Data Services (DSS) side. The goal is to have incoming team members take on problems the existing team is already ramped up on, rather than spreading existing teams too thin. ### Addressing Immediate Data Formatting and Reporting Issues The core of the meeting involved troubleshooting immediate problems with generating accurate client reports, highlighting a disconnect between data extraction and reporting formats. - **Report Generation Challenges:** A specific task involved updating a custom report with new data, but a formatting mismatch was encountered. The exported data file contained two extra hidden columns ("C site" and "year") not present in the destination report template, making a simple copy-paste operation impossible and requiring manual, cherry-picked data transfer. - **Action Plan for Data Verification:** The immediate plan was to update the report data back to January to ensure historical accuracy. A key step is to compare the newly populated report data against the "raw" data source to validate consistency before client delivery. ### Investigating and Remediating Data Quality Problems Significant time was spent diagnosing the root cause of incorrect data appearing in client-facing reports, tracing it back to source system configuration errors. - **Root Cause - Incorrect Observation Types:** The primary issue was identified as misconfigured "observation types" or "service descriptions" for certain natural gas bills. For example, a "Wellhead" observation type was incorrectly assigned instead of the correct "Use" type, which caused consumption and cost data to be hidden or reported as zero in the monthly views. - **Impact and Validation of Fixes:** Correcting the observation type at the source immediately fixed the data displayed in the internal platform (UBM), validating both the diagnosis and the correction pathway. - **Need for Operational Audit:** To fully resolve the issue, an operational (Ops) team review is required. They must audit all bills for the three affected locations to ensure the "observation type" aligns correctly with the "vendor type" for each account, a manual but necessary cleanup step for historical data. ### Resolving Ambiguous Unit of Measure Data A secondary but important data integrity issue was discussed regarding missing unit of measure information on bills, which complicates validation.
- **Missing Bill Information:** For some bills, the unit of measure (e.g., therm) is not explicitly stated on the bill PDF, forcing the use of an educated guess. This creates a risk if a client were to question the basis of the calculations. - **Proposed Internal Solution:** To mitigate this, an internal reference spreadsheet was proposed. This would be maintained by the data services or Ops team, containing known or negotiated rates (dollars per therm) for clients, allowing for reasonableness checks and more confident data handling even when source documents are incomplete. - **Communication Strategy:** It was advised to avoid highlighting this particular unit-of-measure ambiguity in client communications for now, as no systematic change had been made. The focus should remain on fixing the observable data issues. ### Long-Term Systemic Solutions and Integration The meeting concluded with a forward-looking dialogue on preventing these issues through better system architecture and data flow between platforms. - **The Core Gap: Missing "Bill Template" Data:** A fundamental flaw was identified: the Data Services (DSS) interpreter system currently processes each bill in isolation without knowledge of the account's intended configuration or "bill template" (e.g., correct service type, observation type). This lack of context is why it perpetuates incorrect data if the source is wrong. - **Immediate and Long-Term Integration Plans:** The immediate priority is to extract the "bill setup" logic from the legacy system and incorporate it into DSS to serve as a reference template. The long-term vision is to create a single, shared "bill template" per account that is authoritative for both the DSS and UBM platforms, ensuring consistency. - **Architectural Debate on System of Record:** A key discussion point was whether UBM or DSS should be the ultimate source of truth for the bill template. The consensus leaned towards building the robust template logic within DSS first, as it is the data ingestion layer, and then integrating it with UBM to ensure operational changes in UBM are reflected back to DSS, closing the loop.
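The proposed reference spreadsheet lends itself to a simple reasonableness check. The sketch below assumes a hypothetical client entry and made-up rate bounds; the real table would hold the known or negotiated dollars-per-therm figures the Ops or data services team maintains.

```python
# Sketch of the proposed reasonableness check: when a bill omits the unit of
# measure, sanity-check the implied dollars-per-therm rate against a maintained
# table of known or negotiated client rates. The bounds here are made up.

KNOWN_RATES = {  # client -> (min, max) expected $/therm
    "example_client": (0.60, 1.40),
}

def implied_rate_ok(client: str, total_cost: float, usage_therms: float) -> bool:
    """True if cost/usage falls inside the client's expected rate band."""
    if usage_therms <= 0:
        return False  # zero usage with a real charge needs human review anyway
    low, high = KNOWN_RATES[client]
    return low <= total_cost / usage_therms <= high

# Usage captured in MMBtu instead of therms implies a rate ~10x too high,
# so the same check also catches unit-of-measure mix-ups.
print(implied_rate_ok("example_client", 758.90, 758.9))  # ~$1.00/therm -> True
print(implied_rate_ok("example_client", 758.90, 75.89))  # ~$10/therm  -> False
```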
Billing Stabilization Plan
## Summary The meeting centered on critical operational challenges in billing, data processing, and client audits. Discussions revealed deep concerns about temporary fixes, systemic data accuracy problems, and the strain of managing high-volume client accounts. ### Challenges with Billing Adjustments and Past-Due Processing A core discussion revolved around the methodology for handling billing adjustments, specifically for older accounts (2015-2016). There is significant apprehension about continuing a particular adjustment process as a long-term solution. - **Questioning a Temporary Fix:** The current method of inserting adjustment lines to reconcile totals is viewed as unsustainable without addressing the underlying root cause of the discrepancies. The concern is that this approach is creating more data cleanup work for the future. - **Accuracy Issues Discovered:** The adjustment logic has been found to be flawed in certain scenarios, particularly when past-due amounts are involved. The **total amount due** captured by the system is sometimes incorrect, which in turn makes the automated adjustment amount invalid. - **Suspected Root Cause:** A potential cause identified is the *order of operations* in the system. If a past-due amount is removed first, it may cause the system to recalculate other figures (like current charges), leading to a mismatch with the initially captured data. This highlights a need to review whether checks should be against the *subtotal* versus the *total*; a minimal version of such a check is sketched at the end of this summary. ### Audit Process and Reporting Struggles The team navigated a difficult audit process, highlighting gaps in preparation and the pressure to provide compliant documentation under tight constraints. - **Creative Compliance for an Audit:** To satisfy an auditor's request to demonstrate that 24 specific accounts were not missing, a non-standard report modification was made. A simple "yes/no" column was added to an onboarding sheet as a last-resort solution to meet the demand for documented evidence. - **Frustration with Audit Methodology:** While the audit uncovered some legitimate reporting oddities that required investigation, much of the process was considered inefficient. Auditors focused on a high number of already-deleted bills and made selections that seemed nonsensical, such as an account that had already been resolved. - **Underlying Data Problems:** The audit underscored a larger issue: a significant number of accounts (reportedly over 100) are not present in the platform but were not flagged by the auditors, who only identified 24. This points to a major data integrity gap that the team is aware of but was not required to fully address in this audit cycle. ### Systemic Issues in Data Processing and LLM Training Technical limitations of the current data processing framework and AI models were a major point of concern, directly impacting data quality and report accuracy. - **Observation Type Mismatches:** A critical error was identified where the **observation types** (e.g., full service, supply, distribution) on bills were incorrectly mapped because the system was not referencing the correct source data from the setup bill. This misclassification has caused widespread inaccuracies in reports for bills processed from March through August. - **Limitations of the LLM Workflow:** The practice of providing all processing instructions to the Large Language Model (LLM) in a single file is degrading its performance. The more instructions added for edge cases, the lower the overall accuracy becomes.
The solution proposed is to separate instructions by **bill type**, though this is acknowledged as a time-consuming change. - **Legacy of Manual Errors:** The problem is compounded because many of the foundational bills were entered manually. Subsequent automated (DSS) processes then propagated the same incorrect logic, cementing the errors into the system. ### Managing High-Volume Client Accounts The operational burden of processing daily invoices for large clients is creating resource allocation challenges. - **Overwhelming Daily Volume:** Accounts like Ascension are receiving over 200 invoices per day, requiring dedicated contractor resources solely to keep pace. This volume is so high that even with focused effort, it is difficult to clear the daily intake. - **Impact on Other Work:** The constant high-volume work for these key accounts ties up personnel, making it challenging to address issues for other clients or work on systemic improvements. The Medline account was cited as an example where the issue isn't complexity, but rather the constant shifting of human resources away from it to other priorities. ### Rebuilding Client Trust in Upcoming Reviews The team prepared for sensitive client meetings aimed at addressing billing errors and restoring confidence in their processes. - **Addressing Specific Billing Errors:** An imminent meeting was scheduled to review specific bills where miscaptures and incorrect unit measures led to inaccurate client reports. The goal is to clean up the data for a few key locations and, more importantly, use the session to **rebuild trust** with the client. - **Combating the "AI Error" Narrative:** There is ongoing frustration that clients often blame errors on AI when, in many reviewed cases, the mistakes were traceable to human actions or process gaps (e.g., a human-driven double charge). The team is actively working to correct this perception by providing clear evidence of root causes. - **Broader Control Concerns:** These repeated incidents lead clients to question whether adequate controls are in place to prevent billing errors. The team must continually demonstrate that they understand the failures and are implementing systematic fixes, not just one-time corrections.
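Here is the subtotal-versus-total check referenced above, as a minimal sketch. The field names and the exact decomposition (line items, current charges, past due, total amount due) are assumptions about the bill model, not the system's real schema.

```python
# Sketch of the order-of-operations check: validate captured totals against
# their components *before* any adjustment line is inserted, checking the
# subtotal (current charges) separately from the total (which also carries
# past-due amounts). Field names are illustrative assumptions.

def validate_bill_totals(bill: dict, tolerance: float = 0.01) -> list:
    problems = []
    subtotal = sum(line["amount"] for line in bill["line_items"])
    # Check 1: current charges should match the sum of captured line items.
    if abs(subtotal - bill["current_charges"]) > tolerance:
        problems.append("subtotal mismatch: line items != current charges")
    # Check 2: total amount due should equal current charges plus past due.
    expected_total = bill["current_charges"] + bill.get("past_due", 0.0)
    if abs(expected_total - bill["total_amount_due"]) > tolerance:
        problems.append("total mismatch: removing past due first may have "
                        "triggered a recalculation of current charges")
    return problems

bill = {
    "line_items": [{"amount": 120.00}, {"amount": 30.50}],
    "current_charges": 150.50,
    "past_due": 75.00,
    "total_amount_due": 225.50,
}
print(validate_bill_totals(bill))  # [] -> no adjustment line needed
```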
Bill Template Review
## **Summary** The meeting centered on resolving critical data discrepancies in a utility billing system, specifically concerning natural gas usage reports generated for a partner. The core issue was that certain consumption data was not being correctly captured in monthly reports, leading to potential billing inaccuracies. The discussion involved a deep technical dive into system configurations, data mapping, and a live troubleshooting session to identify and correct the root causes. ### **Verification of Data for Monthly Billing Reports** The primary goal was to confirm that corrected data was ready for the monthly billing process run by an external team. A review had been conducted on problematic data for three specific locations. The consensus was that once internal data verification was complete, the external team could proceed with generating the report. However, a cautious approach was emphasized, seeking confirmation that the underlying data issues had been genuinely resolved before finalizing the billing run. The complexity stemmed from the data not being a simple dump but involving manipulation of fields like location description and vendor bill ID. ### **Technical Demonstration and Data Alignment** A screen share was initiated to compare the structure of the source report from the external partner with the internal system's data format. Key findings included: - **Column mismatch:** The external report contained additional columns (e.g., Calendar Month, Billing ID, Virtual Account) not present in the internal data view. - **Data insertion method:** It was determined that the process involved manually inserting new monthly data above existing rows in a spreadsheet, rather than a full refresh, which required careful handling to maintain data integrity. - **Resolution approach:** The immediate plan was to manually align the data by copying corrected figures (specifically from April) into the appropriate spreadsheet tabs, though this was acknowledged as a temporary, time-consuming fix. ### **Understanding Consumption Calculations and Service Types** The group analyzed why specific natural gas bills showed zero or incorrect total consumption in reports. The investigation revealed a fundamental misalignment between the service type and the observation code used: - **The core problem:** Bills marked with a "full service" vendor type were using observation codes designated for "supply-only" data (e.g., "wellhead" codes). This caused the system's calculation engine to assign the usage value to a "generation consumption" metric instead of the "total consumption" metric. - **Impact on reporting:** Consequently, the monthly usage report would show a zero or blank in the Total Consumption column, while the actual usage value sat in the Generation Consumption column. The partner's reporting logic, which uses the greater of the two consumption values, was failing because the correct data was in the wrong column. - **Diagnostic tools:** The "Monthly View" and "Calculated Values" tabs within the location details were highlighted as essential tools for diagnosing these issues. The "Calculated Values" section clearly shows whether an observation contributes to "total consumption" or "gen consumption." ### **Correcting Observation Types and Immediate Fix** A live edit was performed on a sample problematic bill to demonstrate the correction. 
- **The fix:** The service description/observation type on the bill was changed from "wellhead UC" (a supply-only code) to "use UC" (a code appropriate for full-service accounts that include both usage and charges). - **Immediate result:** After the change, the system's calculated values immediately updated, showing the usage correctly allocated to "total consumption." This change would then flow correctly into the monthly view and the generated reports. - **Scope of the issue:** The problem was identified as affecting multiple natural gas bills from a specific vendor (Constellation) across several locations. The immediate action item was to manually audit and correct the observation types for these known problematic bills. ### **Systemic Issues and Long-Term Solution** The meeting concluded by addressing the need for a permanent solution to prevent recurring errors. - **Root cause:** The error originates in the data ingestion pipeline (DSS/Data Services), where a "bill template" logic is incorrectly assigning "wellhead" observation codes to all natural gas bills, regardless of their actual service type (supply, distribution, or full service). - **Template disconnect:** There is a noted lack of synchronization between the validation rules and bill templates in the legacy Data Services system and the new UBM system. This gap allows incorrect data mappings to pass through without checks. - **Proposed future state:** A long-term solution requires updating the mapping logic in the data ingestion system to respect service type when assigning observation codes. Furthermore, there was a suggestion to eventually create a shared "bill template" mechanism between Data Services and UBM to ensure consistent rules and validations across both platforms, preventing similar systematic errors.
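The "Calculated Values" behavior described above can be pictured as a routing table from observation code to consumption metric. The sketch below uses the two codes named in the meeting; the mapping structure and function name are assumptions for illustration.

```python
# Routing table from observation code to consumption metric, mirroring the
# "Calculated Values" behavior described above. Code names follow the
# meeting's examples; the mapping itself is an assumption.

SUPPLY_ONLY_CODES = {"wellhead_uc"}   # feeds "generation consumption"
FULL_SERVICE_CODES = {"use_uc"}       # feeds "total consumption"

def calculated_values(observations):
    metrics = {"total_consumption": 0.0, "gen_consumption": 0.0}
    for code, usage in observations:
        if code in FULL_SERVICE_CODES:
            metrics["total_consumption"] += usage
        elif code in SUPPLY_ONLY_CODES:
            metrics["gen_consumption"] += usage
    return metrics

# The same usage value lands in a different report column depending on the code:
print(calculated_values([("wellhead_uc", 758.9)]))  # total = 0 -> report shows zero
print(calculated_values([("use_uc", 758.9)]))       # total = 758.9 -> report correct
```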
Reports / Data Needed for Success in 2026
## Customer The customer is an energy or facilities management professional responsible for monitoring and reporting utility usage, specifically natural gas, across multiple locations for their organization. They operate in a complex environment with data sourced from multiple vendors (e.g., Constellation, another unspecified vendor) and rely on the product to process, convert, and report this data accurately for financial and operational oversight. ## Success The primary success with the product lies in its ability to centralize and process utility data from disparate vendors into a unified reporting system. This allows for the identification of significant anomalies, such as detecting instances of zero natural gas usage coupled with substantial financial payments, which are critical for catching billing errors or operational issues. The system's configuration to handle different units of measurement (like therms and MMBTU) and perform conversions is also a foundational strength that supports data integrity. ## Challenge The most significant challenge centers on data accuracy and validation processes, particularly when dealing with historical data imports and vendor-specific discrepancies. A specific issue was identified where natural gas usage data for certain locations from one vendor may have been imported with an incorrect unit of measure (MMBTU instead of therms), which directly impacts the accuracy of critical reports. Furthermore, there is a noted lack of automated, robust alerting within the product for "huge anomalies" (such as drastic, unexplained spikes in usage or cost between billing periods), which forces reliance on manual review to catch potentially costly errors. ## Goals The customer's key goals for using the product are: - To achieve complete and accurate population of all utility usage reports, ensuring all historical and current data is correct. - To verify and standardize the unit of measure for natural gas usage across all vendors and locations to guarantee report consistency. - To implement or enhance automated alerting thresholds within the product to proactively flag anomalous data patterns (e.g., zero usage with high cost, or massive month-over-month usage variances) without requiring manual data sweeps. - To efficiently audit and validate data for all utility locations, moving beyond a reactive approach to a systematic, ongoing verification process.
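The alerting the customer is asking for reduces to a couple of cheap checks per billing period. The sketch below uses made-up thresholds (a $100 cost floor, a 3x swing); real values would be tuned per client and commodity.

```python
# Sketch of the requested anomaly alerting: flag zero usage paired with a
# large charge, and big month-over-month swings. Thresholds are placeholders.

def anomaly_flags(prev_usage: float, usage: float, cost: float,
                  cost_floor: float = 100.0, spike_ratio: float = 3.0) -> list:
    flags = []
    if usage == 0 and cost >= cost_floor:
        flags.append("zero usage with substantial payment")
    if prev_usage > 0 and usage > 0:
        ratio = max(usage, prev_usage) / min(usage, prev_usage)
        if ratio >= spike_ratio:
            flags.append(f"usage changed ~{ratio:.0f}x month over month")
    return flags

print(anomaly_flags(prev_usage=7500, usage=0, cost=29119))   # zero-usage alert
print(anomaly_flags(prev_usage=750, usage=7500, cost=2900))  # ~10x spike alert
```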
Bill Templates Review
## Summary ### Customer Complaint Analysis The meeting centered on investigating customer complaints about incorrect data in embed reports, specifically focusing on discrepancies in natural gas bill details. - **Dyersburg location bill validation**: An April bill showed 7,589 MMBTU usage with a $29,119 charge captured as "wellhead use cost," confirmed as correct for natural gas supply accounts. The customer's report displayed this data inaccurately despite correct system capture. - **Second location data discrepancy**: A bill with meter readings 27,599 and 27,600 showed a usage of "1" unit charged at $30, but validation revealed it should be **$0 usage** with only a base fee. The unit of measure (therms vs. MMBTU) was unresolved due to conflicting historical setups. - **Third location (KDL) issues**: Consumption data (e.g., 2571 vs. 275B) was mismatched between physical bills and reports, requiring manual verification of meter IDs and billing details. ### Data Capture and Unit of Measure Challenges Significant time was spent diagnosing why certain values were captured incorrectly across multiple accounts. - **Natural gas classification rules**: Confirmed that "wellhead use cost" applies exclusively to *supply* accounts, while "use cost" is correct for *distribution* accounts. - **Unit of measure conflicts**: Some accounts had dual units (therms and MMBTU) without clear vendor documentation, risking miscalculations. For example, a 1-unit usage charge was invalidated because it represented an unmetered base fee. - **DSS limitations**: The system fails to auto-classify bill types (supply/distribution/full service) accurately, often assigning opposite categories. ### Bill Template Management Discussion covered how bill templates are created, maintained, and matched to incoming bills. - **Template determination**: Bills are mapped to templates using the combination of client, vendor, and account number, with account numbers standardized by replacing special characters with dashes and removing spaces. - **Persistence of template errors**: Once line items are added to a template (via DSS or manual setup), they become permanent and can only be *inactivated* (not deleted), causing legacy data issues. - **New vendor challenges**: Templates for new vendors require manual setup since the system lacks auto-recognition capabilities. ### System Integration Gaps Attendees identified critical disconnects between data systems affecting accuracy. - **DSS-DDIS misalignment**: DSS doesn't ingest key account attributes (e.g., bill type) from DDIS during setup, forcing post-processing workarounds that introduce errors. - **Account number handling**: Inconsistent formatting (e.g., spaces/dashes) between systems causes template mismatches, requiring dual-format solutions to satisfy downstream reporting. - **Missing bill downloads**: Non-bill-pay accounts (e.g., three specific locations) had undownloaded September bills, indicating a pipeline failure for postpaid customers. ### Strategic Improvements Proposed solutions focused on systemic fixes rather than one-time corrections. - **Validation protocol enhancement**: Implement vendor verification for ambiguous units of measure during data capture, especially for historical imports. - **Template logic overhaul**: Integrate DDIS data (bill type, observation types) into DSS validations to eliminate manual classification rules.
- **Account standardization**: Adopt uniform account-number cleansing (strip spaces, replace specials with dashes) across all systems to improve template matching reliability.
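A minimal sketch of that cleansing rule, following the conventions described in the meeting (strip spaces, replace special characters with dashes); the exact allowed character set is a guess.

```python
import re

# Uniform account-number cleansing per the meeting: remove spaces, replace
# special characters with dashes. The special-character set is an assumption.

def clean_account_number(raw: str) -> str:
    no_spaces = raw.replace(" ", "")
    # Collapse any run of non-alphanumeric characters into a single dash.
    return re.sub(r"[^A-Za-z0-9]+", "-", no_spaces).strip("-").upper()

# Variant formats from different systems resolve to one template key:
for raw in ("1234-567/8", "1234 - 567 / 8", "1234.567.8"):
    print(clean_account_number(raw))   # all print 1234-567-8
```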
Victra Questions
## Customer The customer, Victra, operates in the utility payment management sector. Their core function involves processing and paying a high volume of utility invoices on behalf of their own clients. Their background indicates they manage substantial financial reserves and require robust, scalable systems for payment automation, reconciliation, and detailed reporting. They are a sophisticated user, deeply engaged with the technical and operational nuances of the platform, and hold the service to high standards for accuracy, timeliness, and transparency. ## Success A significant success achieved for this customer is the architectural separation of emergency and non-emergency payment processing. This enhancement directly addresses a critical operational need by allowing the customer to clearly distinguish and reconcile different payment types. The work to split the associated AP files and fund pulls "to the penny" has been completed, providing a much clearer audit trail. Furthermore, ongoing development to visually separate these payments within the user interface is slated for completion soon, promising to further streamline their reconciliation workflow and provide at-a-glance clarity on payment statuses. ## Challenge The primary challenge centers on reconciliation timing and data visibility, particularly concerning emergency payments. While the funds for an emergency payment are pulled immediately, the corresponding AP file (which includes the processing fee) is generated later, often manually, causing a disconnect that hinders real-time reconciliation for the customer. Additional persistent challenges include managing uncashed checks over 30 days, where visibility into the reason for the delay (e.g., a utility only accepting checks) is limited. There are also specific issues with utilities like PG&E, where payments occasionally default to checks despite available electronic credentials, and a need for clearer SLA reporting and tracking for their payment processor's performance. ## Goals The customer's key goals for using the service are: - **Achieving seamless, real-time reconciliation**, especially for emergency payments, where fund pulls and AP file generation are synchronized. - **Eliminating manual workarounds**, such as operators manually adding fees to AP files, through greater automation and system integration. - **Gaining comprehensive visibility and reporting** on payment statuses, including a clear, actionable dashboard for tracking uncashed checks and the reasons behind their delays. - **Improving SLA transparency and accountability** through accessible metrics that show payment processor performance over time. - **Refining billing and reporting accuracy** by excluding mock bills from certain "missing bill" reports to reflect only actual, unpaid invoices. - **Ensuring high electronic payment success rates** by effectively managing and utilizing payment credentials for all utilities to avoid unnecessary check payments.
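The "to the penny" reconciliation described above can be stated as one exact-decimal equality per payment. A minimal sketch, assuming each fund pull can be tied to the AP file lines it covers; the record shapes, amounts, and the pull/fee relationship are hypothetical, since the meeting notes leave the exact fee timing open.

```python
from decimal import Decimal

# Sketch: a fund pull reconciles when it equals the sum of the AP file lines
# it covers, compared exactly with Decimal (no float rounding). Per the notes,
# the processing fee may arrive in its own, later AP file.

def reconciles(fund_pull: Decimal, ap_lines: list) -> bool:
    """True when the pull equals the sum of its AP file lines, to the penny."""
    return fund_pull == sum(ap_lines, Decimal("0"))

invoice, fee = Decimal("1500.00"), Decimal("12.34")
print(reconciles(Decimal("1500.00"), [invoice]))  # True  -> payment pull matches
print(reconciles(Decimal("12.34"), [fee]))        # True  -> fee pull matches its file
print(reconciles(Decimal("1500.00"), [invoice, fee]))  # False -> fee filed separately
```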
UBM Errors Check-in
## Onboarding Challenges and Account Management Issues Several accounts remain unonboarded despite prior efforts, with 24 specifically flagged as problematic. Vendor-supplied lists failed to include these accounts, causing discrepancies. A new onboarding process for NGX accounts is required, but manual verification revealed mismatched account numbers between systems, complicating reconciliation. Customers are now reporting billing inaccuracies beyond known issues like double payments, including unexpected errors emerging over extended periods. ### Billing System Improvements - **Accrual report fixes**: Virtual accounts aren't linking properly, causing incorrect usage projections. Adding billing account IDs to reports will allow indirect linking by matching cleaned vendor/account combinations (hyphens/spaces removed). This enables customers to: - Identify unlinked accounts for closure - Accrue correctly by commodity type - **Account validation gap**: When web download teams select the wrong account during bill processing, errors cascade through systems. Implementing a pre-audit check will flag discrepancies between selected accounts and bill-captured account numbers for operations review. ### Payment Processing Complexities - **Duplicate payment fallout**: While Medline duplicates are resolving via credits/refunds, Constellation faces significant unapplied credits due to supplier reluctance to issue checks. Entergy requires manual intervention after payments were applied to wrong accounts. - **Emergency payment workflow**: Transaction fees now generate individual AP files for amount matching, but timing mismatches persist. Payment processors delay fee pulls by 1-2 days due to compliance constraints, preventing real-time synchronization despite customer requests. ### System Integration Gaps - **Legacy vs. modern system conflicts**: Setup builds originate in DDIs (legacy) before UBM mapping, causing data inconsistencies like account number formatting differences. Sales teams often bypass established workflows, creating unvalidated accounts. - **Location mapping bottlenecks**: Teams manually map locations despite existing bulk loaders, wasting resources. Automation adoption is critical as new clients (e.g., Ascension) onboard. - **Data synchronization failures**: Hundreds of accounts exist in UBM but not in core systems or master lists, indicating unauthorized invoice processing without proper setup. This caused Ascension to discover 1,000+ unlinked accounts during audits. ### Vendor-Specific Operational Hurdles - **Victra account discrepancies**: Payment volume dropped sharply due to unreconciled location naming differences between client-provided data and utility records. Disconnects persist despite repeated utility outreach. - **Constellation
DSS Priorities Sync
## VPN Connectivity and Access Issues Participants encountered difficulties connecting to the VPN and accessing Microsoft accounts on personal devices. An error stating that the organization does not allow the device appeared during login attempts, blocking access to critical systems like DSS and FTG Connect. Workarounds were discussed, such as importing an XML configuration file for VPN access without logging into Microsoft accounts. However, this limited functionality for tools requiring Microsoft authentication. ## Ticket Status and Issue Resolution A double-counting ticket review confirmed no duplicate entries in three UBM bills, though line-item organization issues were noted. To resolve discrepancies, the team emphasized verifying CSV files from the portal: - **CSV validation process**: Downloading and comparing CSVs sent to UBM against source data to identify mismatches. - **Admin access utilization**: Leveraging portal permissions to directly retrieve CSVs for audit purposes. ## Development Updates and Blockers Progress on development tasks faced environment constraints: - **Script enhancements for Cosmos DB**: A new script was created to handle data without application changes, but encountered continuation token issues during record searches. Testing remained limited to a development database with partial records. - **FTG Connect development halt**: Local work stalled due to unresolved redirect URL dependencies in app registration, pending external support. ## Virtual Account Mapping Challenges Fundamental gaps in virtual account mapping logic were identified as a priority: - **Missing setup data integration**: DSS lacks access to initial account configuration details (e.g., supply/distribution/full-service flags) from DDS/UBM setups, causing validation issues. - **Data sourcing strategy**: Plans to pull account-specific setup information (e.g., service account formats, observation types) directly from DDS via existing SQL Server connections. This will enable consistent virtual account validation against customer expectations. ## System Enhancements and Task Prioritization Key operational improvements were outlined: - **Invoice reprocessing mechanism**: Implementing a daily cron job to handle failed invoices, moving beyond the current commented-out solution. - **Deferred tasks**: FTP folder tracking and DSS static uploads were prioritized over email tracking due to bandwidth constraints. - **Data verification urgency**: Immediate focus on reconciling UBM CSV files to close validation loops.
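The CSV validation step described above is essentially a keyed diff between two files. A minimal sketch using the standard library, assuming an `invoice_id` key column; the real key field and file layout may differ.

```python
import csv

# Sketch: diff the CSV the portal sent to UBM against the source extract,
# keyed on an assumed invoice-id column.

def load_rows(path: str, key: str) -> dict:
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}

def diff_csvs(ubm_path: str, source_path: str, key: str = "invoice_id") -> list:
    ubm, source = load_rows(ubm_path, key), load_rows(source_path, key)
    mismatches = [f"missing in UBM: {k}" for k in source.keys() - ubm.keys()]
    mismatches += [f"unexpected in UBM: {k}" for k in ubm.keys() - source.keys()]
    for k in ubm.keys() & source.keys():
        if ubm[k] != source[k]:
            mismatches.append(f"field-level mismatch on {k}")
    return mismatches

# print(diff_csvs("ubm_export.csv", "source_extract.csv"))
```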
Agentic Finance GTM
## Background The candidate's career began in consulting, working with Fortune 500 companies and early-stage startups. They transitioned to fintech as the founding Product Manager at Finmark, a Y Combinator-backed startup focused on financial planning and analysis tools for startups and SMBs. Following Finmark's acquisition by Bill.com, they moved to Constellation (a major US non-fossil energy provider) as a Senior Product Manager/Director, currently owning their utility management platform. All professional experience is US-based. ## Work Experience - **Finmark Tenure:** Joined as the first employee after the founders and spearheaded product strategy from inception through acquisition. Key contributions included: - Identifying core market segments and developing the "20% features for 80% impact" approach to capture early adopters. - Leading the entire product lifecycle as the sole product manager initially, later managing a growing team (reaching ~24 offshore and 7 US-based engineers). - Driving the critical strategic pivot from solely serving startups to targeting SMBs, which significantly accelerated growth and led to acquisition. - Facing significant operational challenges, notably high turnover within the offshore engineering team due to unsustainable workload intensity, highlighting the difficulty of retaining contractors versus invested employees. - **Constellation Role:** Currently owns and directs the development of the utility bill management platform, brought in to lead after the previous director departed. Focuses on managing and evolving this enterprise-scale platform. - **Working Style:** Demonstrated a hands-on, validation-driven approach at Finmark, working closely with engineering and design teams daily. Emphasizes defining problems, validating solutions, and executing roadmaps. Learned the importance of sustainable team structures and the need for strategic flexibility (e.g., the pivot to SMBs). ## Candidate Expectations - **Role & Opportunity:** Actively seeking the right opportunity, particularly within the burgeoning Saudi startup ecosystem. Expresses strong interest in early-stage, high-impact roles. - **Working Arrangement:** Open to starting with a remote work arrangement, with flexibility regarding future location adjustments ("not set in stone"). Values the ability to contribute effectively regardless of initial location. - **Industry & Technology Focus:** Deeply interested in fintech and startups, specifically those leveraging AI. Believes AI integration is critical for competitive advantage ("if you're not embracing AI... you're going to be left behind"). Sees the current Saudi market as a "level playing field" ripe with opportunity. - **Company Stage & Impact:** Drawn to early-stage ventures where they can play a significant role in building and shaping the product and company, similar to their founding role at Finmark. ## Questions/Concerns The candidate raised several strategic and operational questions: - **Market Strategy:** Inquired about the focus on the local Saudi market versus the broader international (Commonwealth) strategy, asking specifically, "Are you starting with the local market though?" and probing the transition from local connections to scalable customer acquisition ("How do you go from connection to a customer?").
- **Competitive Landscape:** Questioned the existence of established competitors ("I'm assuming since you're saying that there isn't already an established company that does that yet") and sought clarification on the competitive differentiation. - **Product Focus & Validation:** Asked for specifics on the initial customer use cases and value proposition ("they're mainly there for the agentic services... What exactly what part of their business is it just knowledge? Is it more than that?"). - **Primary Challenges:** Identified and questioned the perceived biggest next challenge post-funding ("what's the next big challenge for you?") and probed potential solutions for the identified go-to-market challenge ("And the solution for that you think is white or still question mark for you?"). - **Team Building:** Ended the interview seeking clarity on current team composition and hiring focus for co-founders or key early roles ("What's the team composition now? Looking for the co-founder? What you looking for?").
UBM Planning
## Summary ### Issue Overview and Problem Statement The meeting focused on investigating a critical issue raised by the client (Hexal) regarding discrepancies in their natural gas usage data, where values in a custom report were significantly lower than those shown in the UBM portal and on actual bills. The core problem is that the report is displaying single-digit values for months where the usage should be in the thousands of therms, causing the client to believe the system is undercounting their consumption. ### Analysis of the Primary Discrepancy (TPA Location) The team conducted a detailed analysis of a specific example for the TPA location, focusing on the month of July. A key discovery was that the report's values are being misinterpreted due to unit conversion and bill calendarization. - **Unit Conversion Explained:** The underlying issue stems from how data flows through the system. The UBM portal displays usage in therms, but the client's custom report is pulling and presenting the data in **MMBtu**. The conversion from therms to MMBtu involves dividing by approximately 10, which explains why a reported value of ~54 therms in UBM appears as **~5.4 MMBtu** in their report. This is not a data error but a potential misunderstanding of the report's output unit. - **Bill Calendarization Clarified:** A related factor is the handling of bills that span multiple months. A single invoice's total consumption is split (calendarized) across the service months it covers. Therefore, the usage for a specific calendar month in the report is not a single bill's total but a pro-rated portion, which could appear unexpectedly low (both the conversion and the pro-rating are sketched in code at the end of this summary). ### Investigation of a Secondary Data Issue (Dyersburg Location) A second, more serious issue was identified for the Dyersburg location in April, where the report showed zero usage despite the bill containing a valid charge. - **Root Cause Identified:** The problem was traced to a **data classification error** within the bill's observations. The usage observation for this Constellation bill was incorrectly coded as a **generation consumption** type instead of a standard **total consumption** type. - **Impact of Misclassification:** Because the reporting logic sums usage based on specific observation codes, this misclassification caused the ~75.9 MMBtu (758.9 therms) of usage to be excluded from the total consumption calculations in the report. This constitutes a genuine undercount. ### Discussion on Historical Data and System Logic The conversation delved into the logic of how usage is classified and calculated within the system, highlighting a potential inconsistency in how historical versus live data is handled. - **Historical vs. Live Bills:** An investigation into past bills revealed that earlier ("historical") bills often used a simple `usage` observation, while more recent ("live") bills from the same supplier are being mapped with a `gen` (generation) observation code, even for full-service accounts where this classification might be incorrect. - **Need for Process Alignment:** This inconsistency suggests the problem with the Dyersburg bill is not a one-off anomaly but could be a recurring issue tied to the mapping logic from the data source (DSS). It necessitates defining the correct observation type for various billing scenarios to prevent future discrepancies. ### Resolution Path and Next Steps The team outlined a two-pronged approach to resolve the issues and prevent recurrence.
- **Immediate Fix for the Dyersburg Bill:** The specific bill with the misclassified observation needs to be corrected by an admin user editing the bill to change the observation type from `gen` to the appropriate code for total consumption, which will allow it to be included in report totals. - **Long-Term Analysis and Rule Definition:** To address the root cause, a deeper analysis is required involving subject matter experts (like "Afton") and operations teams. The goal is to: - Clearly define the expected observation type for natural gas usage in different contexts (e.g., full-service vs. supply-only). - Understand and potentially correct the mapping logic from the data supplier to ensure future bills are classified correctly upon import. - This will help create a standardized rule to avoid similar misclassifications going forward.
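The two effects that explained the TPA numbers, unit conversion and calendarization, are both easy to see in code. A minimal sketch using the meeting's divide-by-ten conversion and a day-proportional split across service months; the dates and usage figures are illustrative.

```python
from datetime import date

# The divide-by-ten conversion and the day-proportional calendarization that
# together explain the "low" report values.

def therms_to_mmbtu(therms: float) -> float:
    return therms / 10.0   # 1 MMBtu = 10 therms

def calendarize(usage: float, start: date, end: date) -> dict:
    """Split usage across the calendar months in [start, end), by day count."""
    total_days = (end - start).days
    shares, cursor = {}, start
    while cursor < end:
        next_month = date(cursor.year + (cursor.month == 12),
                          cursor.month % 12 + 1, 1)
        chunk_end = min(end, next_month)
        days = (chunk_end - cursor).days
        shares[(cursor.year, cursor.month)] = usage * days / total_days
        cursor = chunk_end
    return shares

print(therms_to_mmbtu(54.0))   # 5.4 -> the ~54 therms / ~5.4 MMBtu example
# A 30-day bill spanning two months lands partly in each month's report row:
print(calendarize(540.0, date(2025, 6, 20), date(2025, 7, 20)))
# {(2025, 6): 198.0, (2025, 7): 342.0}
```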
Daily Progress Meeting
## **Progress on Bill Data Extraction and Setup** This meeting focused on reviewing the status of data extraction from utility bills, updating the project timeline, and addressing dependencies for the upcoming client update. ### **Status of Non-EDI and EDI Bill Collection** The initial phase of downloading and organizing the raw bill data is nearly complete. A script has been run to separate bills into two key categories for processing. - **Non-EDI Bills:** A total of 7,412 non-EDI bills have been successfully pulled down and are sitting in a designated folder, ready for the next step. - **EDI Bills:** Work is underway to collect approximately 848 known EDI bills into a separate folder. This task is expected to be finished soon. These bills require a different, more time-consuming setup process. - **Image Link Resolution:** An issue concerning roughly 560 bills with missing image links has been resolved. It was confirmed that most of these were summary bills, and the actual PDFs have been received from the customer, so they do not need to be pulled via the script. ### **Updates to the Project Timeline and Gantt Chart** The project's Gantt chart was reviewed and updated to reflect current progress and newly identified tasks, ensuring an accurate picture for the upcoming client meeting. - **Completed Task:** The activity for downloading non-EDI bills was marked as 100% complete. - **New Critical Task Added:** A new line item was created for "Vendor Sender Setup / Bill Loading," which is the work currently being performed by Alberto. This task is a prerequisite for the data extraction phase. - **Timeline Adjustments:** The start date for the "UBM Account & Attribute Setup" task was adjusted, as it was discovered the work had not yet begun as previously scheduled. The team is confirming if a bulk upload tool can be used to accelerate this task, which would prevent a schedule delay. ### **Dependencies and Resource Challenges** A significant portion of the discussion centered on workflow dependencies and the need for additional resources to maintain the schedule. - **Vendor Setup Bottleneck:** The process of setting up vendors for the bills is a manual task currently handled by a single team member (Alberto). This work must be completed before data extraction can begin on those specific bills. - **Resource Strategy:** To avoid delays, the team is exploring assigning an additional resource to begin data extraction on bills where the vendor setup is already finished, allowing work to proceed in parallel. - **Bulk Upload Potential:** A major opportunity to save time was identified. The team is checking if a bulk upload tool can be used to load all location data into the system, which would be far faster than manual entry. Confirmation on the functionality of this tool is pending. ### **Legal and Contractual Items** Progress on necessary legal agreements was discussed, with some uncertainty around timelines. - **LOA (Letter of Authorization) Status:** The LOA with the legal team is still pending. The timeline for its completion is somewhat flexible, as it is not needed until a notice period begins in mid-January. However, there is a desire to finalize the red-line review process to avoid last-minute delays. - **FBO Setup:** Questions regarding the setup of FBO (likely "for benefit of" funds accounts) are being handled by another team. It was noted that the contact, Tim, is working on it and will reach out if any clarifications are needed from this group.
### **Preparation for Client Reporting and Next Steps** The team planned for an upcoming discussion about custom reports and solidified action items. - **Custom Reports Discussion:** A separate meeting is needed to dive into the mapping and specific requirements for custom reports. This meeting will involve key personnel from both the data and business sides to define scope and prevent uncontrolled expansion of the reporting requirements. - **Immediate Actions:** Assignments were made to follow up on the bulk location upload tool and to schedule the custom reports mapping meeting. The updated Gantt chart will be used to communicate progress and current timelines in the next client update.
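The separation script described under bill collection reduces to sorting files against the known EDI account list. A minimal sketch, assuming bills land in one inbox folder and filenames begin with the account number; both the layout and the naming convention are assumptions.

```python
from pathlib import Path
import shutil

# Sort downloaded bill PDFs into EDI and non-EDI folders using the known list
# of EDI account numbers. Inbox layout and "<account>_<date>.pdf" naming are
# assumptions for illustration.

def split_bills(inbox: Path, edi_accounts: set) -> None:
    for bucket in ("edi", "non_edi"):
        (inbox / bucket).mkdir(exist_ok=True)
    for pdf in inbox.glob("*.pdf"):
        account = pdf.stem.split("_")[0]
        bucket = "edi" if account in edi_accounts else "non_edi"
        shutil.move(str(pdf), str(inbox / bucket / pdf.name))

# split_bills(Path("bills_inbox"), edi_accounts={"1234-567-8"})
```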
DSS IPS Infra Access
## **Summary** The meeting focused on reviewing ongoing technical work, addressing several persistent issues, and planning for future infrastructure needs. Key discussions revolved around debugging a recurring permissions system failure, progress on development tasks, the necessity for a production-like testing database environment, and the potential re-enablement of a previously disabled system report. The team also emphasized the importance of formal work tracking. ### **Ongoing Technical Issues and Debugging** The team addressed several critical technical problems requiring immediate attention and investigation. - **Recurring Permissions Assignment Failure:** A significant permissions system bug, which was believed to be fixed the previous week, has resurfaced. This issue prevents developers from properly viewing and accessing key application components such as the File Exchange download. The problem involves complex database permission flags that are not updating correctly through the application, necessitating a deeper investigation to find a permanent solution. - **SQL Database Performance Incident:** A recent incident where the Bills table was being locked, degrading performance, was traced back to a specific developer's machine. The connection from that machine has been blocked as a resolution. ### **Development Work Updates and Tracking** Progress on assigned development tasks was reviewed, with a focus on accountability and visibility. - **URL Redirection for Web Application:** Work was completed to add URL sign-in and redirection functionality for the web application on port 5000 for two other developers. - **Invoice Splitting Feature:** Development on the invoice splitting feature has not yet started due to the developer being occupied with resolving the aforementioned permissions issue and other tasks carried over from the previous week that were not recalled in detail. - **Importance of Jira for Tracking:** The team stressed the critical need to log all work and issues in Jira. This formal tracking system is essential for monitoring progress, providing visibility to stakeholders, and ensuring tasks are not lost or deprioritized when immediate issues arise. ### **Infrastructure and Database Strategy** A primary discussion centered on creating a robust testing environment that mirrors production to prevent future issues. - **Need for a Cosmos DB Replica:** There is a strong requirement for a replica of the production Cosmos DB to allow offshore developers to perform accurate testing with real production data before deploying changes to higher environments. This need arose from past issues where testing without production data failed to catch problems. - **Evaluation of Replica Options:** Three approaches were identified for evaluation: 1) simply reading directly from the production database (which carries a risk of performance impact), 2) creating a read-only replica, or 3) provisioning a full, separate testing replica. The team will research the cost-benefit analysis of these options, considering that the current production Cosmos DB is on autoscale, which can handle query spikes but at a variable cost. ### **System Reporting and Monitoring** The team discussed the status of internal reports and monitoring tools. - **Re-enabling the System Status Report:** The "builds" system status report, which was taken offline, is now considered safe to re-publish. The earlier performance issue was linked to a specific machine's activity, not the report itself.
The plan is to re-enable the report and monitor closely for any adverse effects on database performance. - **Creation of a New SQL Database for Development:** A question was raised about the effort required to create a new database on the SQL Server for development purposes. While the exact effort is unknown, it was noted that production uses a different server (`SQL01`) than development, which is on a managed instance. The task was directed to a specific team member (`Andrew`) for execution. ### **Access Requests and Next Steps** The meeting concluded with a review of pending access requests and immediate next actions. - **Pending Access for NDS Products:** A developer (`Is`) has requested access on port 502, which is noted as a required action. - **Immediate Action Plan:** The immediate priorities are to fix the broken permissions system, research the Cosmos DB replica options, re-enable the system status report with monitoring, and address the pending port access request. The team planned to follow up on the database strategy in the next discussion.
Faisal <> Sunny
## Summary The meeting centered on addressing persistent operational challenges within the bill payment system, planning the strategic direction for the upcoming year (2026), and resolving immediate data processing issues with a key vendor. ### **Immediate Data Processing Issue with Constellation** A specific, urgent problem was identified where invoices from the energy vendor Constellation were producing incorrect data, likely due to failures in the Optical Character Recognition (OCR) and Large Language Model (LLM) pipeline. - **OCR/LLM System Failure:** The discrepancy between the vendor's bill and the data captured in the UI is suspected to be a recurring issue with Constellation, attributed to their invoice's complex formatting (use of colors, boxes, and images) which confuses the automated data extraction system. - **Proposed Immediate Solution:** For the affected customer, a manual data override was agreed upon as the quickest fix. There was a consensus that the operational team should handle this correction internally rather than asking the client to do it. - **Long-term Strategic Fix:** A firm decision was made to move Constellation's data processing away from the current DSS (Document Schema Service) system entirely. Since Constellation provides data in XML format, the plan is to develop a direct, non-DSS pipeline to eliminate the error-prone OCR step for this vendor, which is currently described as an "embarrassing" necessity. ### **Broader Systemic and Operational Challenges** The discussion revealed deep-seated operational inefficiencies that are consuming developer bandwidth and hindering product development. - **Unsustainable Workload:** The current model requires developers to spend a significant majority of their time on firefighting and manual operational tasks (e.g., writing SQL queries for ad-hoc research) instead of building product improvements. This is directly impacting the ability to fulfill feature requests from sales and customers. - **Azure Migration Overhead:** A major ongoing project to migrate infrastructure from Google Cloud Platform (GCP) to Azure is consuming roughly 30% of development capacity. While necessary, this "lift and shift" project is a major distraction that doesn't deliver direct customer value or improve efficiency. - **Compliance Duplication:** The cloud migration is further complicated by compliance audits (like SOC 2), as controls must be re-evaluated and documented for both the old (GCP) and new (Azure) environments, effectively doing the work twice. A shift to cloud-agnostic audit partners was noted as a positive step to avoid this in the future. ### **Strategic Planning and Focus for 2026** Planning for the next year was a primary focus, with a clear emphasis on achieving stability before pursuing new features. - **Primary Goal - Stabilize Bill Pay:** The unanimous strategic focus for Q1 2026 is to achieve operational stability, specifically for bill pay customers. The core objective is to systematically address and eliminate the recurring issues that have plagued the past quarter, lowering the constant operational overhead. - **Stability vs. New Features:** Any product improvements pursued will be those that directly contribute to reducing the current operational burden. New feature development unrelated to this goal is expected to be pushed back. - **Proposed Structural Change - Dedicated Bill Pay Team:** A key proposal for quarantining issues involved creating a dedicated "bill pay" lane within the system.
This would involve a separate UI view and a dedicated analyst focused solely on bill pay invoice issues, preventing them from cascading and consuming the entire team's bandwidth. - **Resource Allocation Philosophy:** The planning approach will involve assessing all required work (backlog, operational fixes, strategic projects) and then deciding how to resource it, without being constrained by current team structures. The idea of forming new, specialized teams to handle specific problem areas was floated as a possibility, rather than just distributing work across existing dev, CSM, and ops roles. - **Realistic Planning:** There was an acknowledgment that a significant portion (e.g., 40-60%) of any plan must account for unforeseen daily issues. The goal is to clearly define what work is non-negotiable and "not pushable," and to set correct expectations around deliverables based on this reality. ### **Analysis of Recurring Vendor Issues** The conversation highlighted a need for better analytics on system performance to identify patterns in failures. - **Vendor-Specific Error Rates:** A question was raised about whether the company tracks error rates at the vendor level to identify patterns, like the known issues with Constellation. This data is not currently analyzed but is recognized as valuable. - **Categorizing Known Issues:** The intention is to bucket common failure modes (e.g., formatting issues, missing date fields) to streamline troubleshooting and response when new problems are reported. The hope is that the number of these buckets does not grow significantly. ### **Next Steps for Strategic Alignment** Concrete actions were defined to formalize the strategic discussion and plan. - **Dedicated Planning Session:** A meeting has been scheduled for the following week to translate the high-level strategic focus into concrete milestones and an actionable plan for Q1 2026. - **Pre-Work and Alignment:** The participants will prepare by consolidating priority lists from different perspectives (product, daily operations, existing backlog) to ensure the plan is comprehensive and has full buy-in from all stakeholders before finalization.
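To make the proposed non-DSS path for Constellation concrete, below is a minimal sketch of parsing a Constellation-style XML feed straight into bill records, with no OCR step. The element names (`Invoice`, `AccountNumber`, `InvoiceNumber`, `TotalDue`, `DueDate`) are assumptions; the real mapping would follow Constellation's actual schema.

```python
# Minimal sketch of a direct XML ingestion path that bypasses OCR/LLM
# extraction entirely. Element names are hypothetical placeholders.
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Bill:
    account_number: str
    invoice_number: str
    total_due: float
    due_date: str

def parse_constellation_xml(xml_text: str) -> list[Bill]:
    root = ET.fromstring(xml_text)
    return [
        Bill(
            account_number=inv.findtext("AccountNumber", default=""),
            invoice_number=inv.findtext("InvoiceNumber", default=""),
            total_due=float(inv.findtext("TotalDue", default="0")),
            due_date=inv.findtext("DueDate", default=""),
        )
        for inv in root.iter("Invoice")
    ]
```

Because the values come from structured fields rather than image extraction, the colors, boxes, and images that confuse the automated pipeline become irrelevant for this vendor.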
Billing Integrity Review
## **Summary** The meeting focused on persistent operational challenges within the bill payment and data management systems, with discussions revolving around immediate firefighting for specific clients and the urgent need for underlying systemic improvements. Key themes included resolving duplicate payments, addressing chronic data mapping errors, redesigning internal reports for better visibility, and moving from reactive customer-specific fixes to proactive, root-cause solutions. ### **Current Bill Processing and Data Verification Backlog** A significant portion of the discussion centered on the manual effort required to process and verify a backlog of bills, particularly for clients like Victra and Ascension. While progress was being made, concerns were raised about the quality and risk of automated processing without manual checks. - **Manual Review Necessity**: For critical clients, there was a strong consensus that bills, especially those with flags like "process special" or threshold errors, must be manually reviewed before payment to avoid costly mistakes like duplicates or paying frozen accounts. - **Ascension's Threshold Errors**: A specific issue was highlighted with Ascension, where a batch of bills contained "threshold errors" that required manual resolution by their team, introducing potential delays if only one person was tasked with this. - **Resource Allocation**: The team was actively working through lists, with hundreds of bills remaining for Victra before shifting focus to other clients, underscoring the scale of the manual verification workload. ### **System Reporting and Data Structure Overhaul** The design and functionality of internal reports were scrutinized, with a plan to move from a static, cumbersome data dump to a dynamic, focused view that provides actionable insights. - **Dynamic Report Design**: The goal is to create a report that dynamically shows the status of the last month's bill and the current month's bill for each account, moving away from a sprawling multi-month data dump that is difficult to analyze. - **Key Data Points**: Essential columns for the new report include invoice date, due date, payment status, and commodity/utility type. It was noted that accounts can have multiple commodity types, which needs to be accommodated in the new structure. - **Foundation First Approach**: The priority is to establish a stable, working "base" report. Additional data points and tabs can be added later, but the initial build has been technically challenging. ### **Duplicate Bill Crisis and Financial Implications** A major pain point was the fallout from a batch of approximately 1700 bills that were processed without proper safeguards, leading to widespread duplicate payments. The Medline account was used as a primary example of the financial and operational mess created. - **Lack of Pre-Validation**: The duplicate payments occurred because critical validation checks, such as comparing the new bill's total amount and due date against the prior bill, were reportedly overridden or not in place during a bulk processing effort. - **Downstream Complications**: The duplicates have created a cascade of problems: credits sitting on customer accounts (e.g., a $15,000 credit applied to a future $47,000 bill), the need for cancellations and refunds, and general uncertainty about which bills actually require payment.
- **Investigative Hurdle**: There is significant frustration and concern because there appears to be no accessible audit trail or backup documentation showing how the decision to process the 1700 bills was made or what checks were performed, making resolution extremely difficult. ### **Chronic System Mapping and Integrity Check Failures** The team delved into the technical root causes behind the ever-present "mapping" issues, where the system fails to correctly match incoming bills to the correct customer accounts in the database. - **Core Matching Logic Flaws**: The primary issue is that the system (UBM) requires vendor account numbers to match *exactly* with pre-mapped formats. Simple inconsistencies like leading zeros, dashes, or spaces cause failures, and there's no intelligent logic to standardize these variations. - **Recent "Fixes" and Their Limits**: Two recent system changes were noted: 1) enforcing checks for all six variables that make up a virtual account, and 2) adding new required fields. However, these have not reduced the volume of mapping errors. - **Suspicion of Manual Overrides**: A theory was proposed that operational staff might be manually correcting account numbers in ways that the automated system (DSS) does not recognize, causing the same accounts to fail mapping repeatedly instead of achieving a "one and done" solution. ### **The Imperative for Root-Cause Solutions Over Patchwork Fixes** A recurring and emphatic point was that focusing on individual customer issues is unsustainable. The team acknowledged that without addressing fundamental system flaws, problems will simply migrate from one client to the next. - **Systemic vs. Symptomatic Fixes**: The conversation stressed that troubleshooting issues like Medline's duplicates in isolation does not solve the underlying problem; it only temporarily resolves a symptom for one client. - **Integrity Check Scope Creep**: The "integrity check" stage, initially intended for onboarding mapping issues, has ballooned. It now catches a variety of problems, including data verification (DV) issues where totals don't match, indicating a blurring of error resolution pathways and a need for clearer process definitions. - **Need for Transparent Logic Updates**: There was a call for better communication about ongoing updates to system logic (like DSS matching rules) to ensure the entire team understands why certain issues may be appearing or persisting. ### **Specific Customer Account and Audit Issues** Several discrete customer problems were highlighted, demonstrating the variety of operational fires the team manages. - **Victra Account Disconnect Problem**: A disconnect order was processed correctly based on a client-provided list, but the utility proceeded with the physical disconnect despite a later client request to cancel. This illustrates challenges with utility communication and timing that are outside direct control. - **Victor's Missing Accounts**: An audit inquiry was mentioned regarding 24 accounts that appear to have never been onboarded from an original list. The hope was that these accounts were simply omitted from the source list, but it points to potential gaps in the initial data handoff or ingestion process. - **Medline as a Test Case**: Given its smaller size (~300 accounts), Medline was suggested as a manageable test case to develop and apply a method for systematically untangling duplicate payment scenarios, as manually reviewing all 1700 affected bills was deemed impractical.
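As an illustration of the standardization logic the matching currently lacks, here is a minimal sketch that canonicalizes vendor account numbers so leading zeros, dashes, and spaces no longer defeat exact-match mapping. A real fix would live inside the UBM/DSS matching path itself; this standalone helper just shows the idea.

```python
# Minimal sketch: canonicalize account numbers before comparing, so that
# "00-1234 567" and "1234567" map to the same virtual account.
import re

def canonical_account_number(raw: str) -> str:
    cleaned = re.sub(r"[\s\-]", "", raw)   # drop spaces and dashes
    cleaned = cleaned.lstrip("0") or "0"   # drop leading zeros, keep a lone "0"
    return cleaned.upper()

assert canonical_account_number("00-1234 567") == canonical_account_number("1234567")
```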
Daily Priorities
## Summary ### Customer Account Priorities and Statuses The meeting centered on assessing the current workload and anticipating challenges from key customers, with a particular focus on three major accounts. - **Med XL is identified as the next potential high-volume challenge:** The account has approximately 300 items in its queue. The complexity is heightened because some bills arrive before their official due date, making it difficult to prioritize and process them clearly. - **Victra requires further understanding and stabilization:** While some progress has been made, the team acknowledged that they still need to "get their arms wrapped around" the Victra account to manage it effectively and reduce pressure. - **Jaco is currently quieter but expected to escalate:** With 162 items pending, Jaco has not been as intensive yet, but there is a strong expectation that their volume will increase soon. The account has a significant number of unprocessed items from November. ### Invoice Processing Trends and Analysis A detailed analysis of billing data revealed unexpected patterns in workflow, prompting a re-evaluation of capacity planning and bottleneck identification. - **Surprising discovery of low bill volume on Tuesdays and Wednesdays:** Contrary to the assumption that weekends cause backups, data from the past three months shows a consistent dip in invoices received on Tuesdays and Wednesdays, with Monday typically bearing the brunt of piled-up work from Friday through Sunday. - **The analysis aims to understand true processing capacity and bottlenecks:** The goal is to move beyond simple monthly capacity and understand if concentrated bill arrivals create unsustainable peaks, even with adequate overall staffing. - **Data integrity and utility billing practices are questioned:** The team debated whether the trend could be real, considering that utilities don't coordinate invoice dates. Questions were raised about whether the data (based on invoice dates, not receipt dates) or internal processes (like formatting delays) could be skewing the results. ### BDE System Integration and Impact Significant concerns were raised about the effectiveness and return on investment of the BDE system (likely a third-party data extraction service), as its promised benefits have not materialized. - **The system's impact on reducing manual work is not evident:** Despite onboarding over 20 utilities, including major ones like Constellation, into the BDE system, the team has not seen a corresponding reduction in the manual web download workload. Requests for more web download resources continue, contradicting the expected efficiency gains. - **Questions about account synchronization and process gaps:** A key uncertainty is whether all new customer accounts are being automatically sent to BDE for management. If this process is still manual or incomplete, it would explain why the system isn't alleviating the team's daily tasks as intended. - **A fundamental reassessment of BDE's value is prompted:** The discussion led to a pivotal question about whether continuing to invest in adding more utilities to BDE is worthwhile if it does not tangibly free up team resources or reduce operational pressure. ### Team Updates and Current Workloads The meeting included updates on team availability and the redistribution of responsibilities to cover absences and manage ongoing issues.
- **Key team members are out this week:** Mary is out for the entire week, and Elizabeth was out sick on the day of the meeting, requiring others to monitor specific communication channels like the EBM chat for urgent questions. - **Contractor resources are narrowly focused:** Two contractors are dedicated exclusively to the three high-priority accounts (Victra, Med XL, and Jaco), but with over 600 items already in the queue, there is a recognition that this focused capacity may still be insufficient if all accounts surge simultaneously. - **Responsibility for monitoring specialized queries is delegated:** Tim was asked to keep an eye on the EBM chat for complex questions, particularly regarding AP files, that may arise in the absence of other team members. ### Ongoing Projects and Process Improvements The conversation covered several ongoing technical and procedural initiatives aimed at resolving persistent issues and improving system accuracy. - **Refining the bill processing and payment application logic:** A process previously run by Ruben, which accidentally caused some duplicate payments, is being re-run with a revised query. The new approach aims to more accurately identify the true "prior bill" to prevent such errors when pushing large batches through for payment (a simplified version of this check is sketched below). - **Dealing with persistent audit inquiries on the Victra account:** The audit team continues to ask detailed follow-up questions about Victra, creating an additional reporting burden. The team expressed frustration, noting they had already provided data that was very close to accurate and now must investigate minor discrepancies. - **Shifting team focus from data cleanup to value-added work:** If the automated DB push process is successful, it should free up the team to work on more strategic tasks, such as improving utility-to-account mapping, rather than manual data correction. - **The immediate goal is to prevent service shut-offs:** A primary operational objective is to process enough bills to avoid any customer service disconnections, with specific concerns about pending items for the Jaco account from November.
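To illustrate the prior-bill check behind the revised query, here is a minimal sketch that, for each incoming bill, finds the most recent earlier bill on the same account and flags likely duplicates before payment. Field names are illustrative, not the production schema.

```python
# Minimal sketch of a "true prior bill" duplicate guard for bulk payment pushes.
from dataclasses import dataclass
from datetime import date

@dataclass
class BillRow:
    account_id: str
    invoice_date: date
    due_date: date
    total: float

def is_likely_duplicate(new: BillRow, history: list[BillRow]) -> bool:
    priors = [b for b in history
              if b.account_id == new.account_id and b.invoice_date <= new.invoice_date]
    if not priors:
        return False
    prior = max(priors, key=lambda b: b.invoice_date)  # the true prior bill
    # Matching total and due date against the prior bill suggests a duplicate.
    return prior.total == new.total and prior.due_date == new.due_date
```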
Sample Bill Retrieval
## Summary ### Core Objective: Sourcing Sample Utility Bills The primary purpose of the meeting was to fulfill a request for sample utility bill PDFs from different vendors and commodity types. The goal is to provide these samples to an external party, Arcadia, who is awaiting the data. ### Methodology for Data Retrieval The process for gathering the required samples involves a two-step technical approach. First, a query must be executed in Cosmos to identify the specific bill IDs that match the desired criteria. Subsequently, these bill IDs will be used to locate and retrieve the corresponding PDF files from the search indices. ### Specific Requirements and Criteria The request has very specific parameters regarding the type and number of bills needed. - **Electricity Bills:** A total of five samples are required from specific vendors, including Duke and PGE. - **Gas Bills:** A total of five samples are required from vendors such as BGE and potentially Constellation. - **Water Bills:** Two samples are needed, but this part of the request was handled separately as the vendors for water had already been identified. ### Constraints and Clarifications Several challenges and limitations were identified during the discussion. It was confirmed that the current system interface does not allow for direct filtering of bills by their commodity type (electricity, gas, water), necessitating the use of a backend Cosmos query. Furthermore, matching bills to specific vendors is complicated because the vendor information stored in the system (e.g., vendor code or client name) may not exactly match the names as they appear on the bills themselves. One vendor listed in the initial request, "Direct," was unclear and was likely to be ignored.
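As a sketch of the two-step retrieval, the following assumes the `azure-cosmos` Python SDK and guesses at the container and property names (`commodityType`, `vendorName`), which would need to match the real schema. The returned IDs would then be used to pull the PDFs from the search indices.

```python
# Minimal sketch: step one of the retrieval, querying Cosmos for bill IDs by
# commodity type and vendor. Endpoint, names, and schema are assumptions.
from azure.cosmos import CosmosClient

def find_sample_bill_ids(endpoint: str, key: str, commodity: str,
                         vendors: list[str], limit: int = 5) -> list[str]:
    container = (CosmosClient(endpoint, key)
                 .get_database_client("billing")
                 .get_container_client("bills"))
    query = ("SELECT TOP @limit c.id FROM c "
             "WHERE c.commodityType = @commodity "
             "AND ARRAY_CONTAINS(@vendors, c.vendorName)")
    params = [{"name": "@limit", "value": limit},
              {"name": "@commodity", "value": commodity},
              {"name": "@vendors", "value": vendors}]
    rows = container.query_items(query=query, parameters=params,
                                 enable_cross_partition_query=True)
    return [row["id"] for row in rows]
```

As noted above, vendor names on the bills may not match the stored vendor codes exactly, so a mapped or fuzzy comparison may be needed in place of the literal `vendorName` match sketched here.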
November Review, December Plan
## Summary The meeting primarily focused on reviewing the progress of work completed in November and planning for upcoming deliverables in December, with a particular emphasis on reporting, payment processing systems, and bug fixes. ### November Work Review A comprehensive review of the work completed in the month of November was conducted, confirming the closure of several key initiatives. - **Invoice Processing**: This item is now considered complete. The foundational monitoring capabilities have been successfully implemented, providing the necessary visibility into system capacity for future operations. - **Emergency Payments**: The development work for the initial phase, specifically for Victra, is technically complete. A minor bug was identified regarding the generation of unique numbers for multiple emergency payments processed on the same day, but this does not prevent the overall item from being marked as done pending final sign-off. - **Unplanned Work**: A significant portion of the team's time, estimated at 20-30%, was dedicated to unplanned activities. This included addressing various bug fixes and completing four new reports that were requested by different team members during the month. ### December Planning and Prioritization The discussion then shifted to the work planned for December, with a focus on establishing realistic timelines and prioritizing tasks. - **PayClearly Credentials & Data**: This is a high-priority item that needs to be completed in December. The work involves ensuring real-time data pullback and accurate payment status reflection within the PayClearly platform, which is a dependency for other features. - **Data Validation Errors**: For most validation errors, short-term fixes have already been implemented. However, the development of long-term, more robust solutions is not within the scope for December due to capacity constraints. - **Reporting Requests**: Several new reports were discussed for December development. The priority is on the "update processing time report" and the "system status report." The requester will finalize the requirements for these and other views to enable the development team to begin work promptly. ### Future Work and Dependencies Several items were identified as requiring further discussion or were moved to a later timeline due to dependencies or resource availability. - **Bank Initiation for Emergency Payments (Phase 2)**: The second phase of emergency payments is dependent on the PayClearly platform. A follow-up meeting is required to determine the best approach, which may involve either an API integration or a process change. This item is not anticipated to be completed in December. - **Upcoming Quarter Planning**: It was suggested that a future meeting should be dedicated to planning for the next quarter (January-March) to provide a clearer long-term roadmap and better estimate delivery timelines for outstanding projects. The creation of a team charter was also mentioned as an ongoing task.
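Regarding the same-day uniqueness bug noted under Emergency Payments, one common remedy is a per-day sequence suffix on the date-based reference. The sketch below is illustrative only; the reference format is an assumption, and a production fix would allocate the sequence from the database rather than from in-process memory.

```python
# Minimal sketch: make same-day emergency payment references unique by
# appending a per-day counter to the date component.
from collections import defaultdict
from datetime import date

_seq: dict[str, int] = defaultdict(int)  # stand-in for a DB-backed sequence

def next_emergency_payment_ref(pay_date: date) -> str:
    day_key = pay_date.strftime("%Y%m%d")
    _seq[day_key] += 1
    return f"EP-{day_key}-{_seq[day_key]:03d}"  # e.g. EP-20251205-001, -002, ...
```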
Simon Onboarding Plan
## Project Kick-off and Initial Planning The meeting served as a formal kick-off for the Simon client project, introducing a dual-role project manager focused on both client support and process efficiency. Two key workflow meetings are scheduled for the following week to deconstruct the end-to-end process into a visual diagram, aiming to identify risks and integration points. The immediate objective is to demonstrate tangible progress by next week, with a primary focus on initiating file ingestion and portal setup. ## File Ingestion and EDI Account Strategy A central challenge involves processing approximately 8,800 accounts, which includes a mix of standard invoices and Electronic Data Interchange (EDI) accounts. A script is actively scraping image links for these accounts, with about 5,500 completed and 2,000 remaining, targeting completion by the next day. A critical strategic decision was made to proactively separate EDI accounts before loading, as loading them would be inefficient and their notes are often deficient. This approach will allow the team to focus on the larger volume of non-EDI accounts to show quicker progress, with a separate workstream established to handle the EDI accounts later. - **Separating EDI Accounts:** The plan is to parse out the estimated 2,157 EDI accounts upfront using the script, preventing the onboarding team from wasting time on them and ensuring a cleaner, more efficient process for the initial load. - **Constellation Accounts:** Approximately 300 accounts are with Constellation, and these will be handled through an internal resource, as the actual PDFs can be retrieved from internal systems rather than relying on the scraped image links. ## Portal Setup and Onboarding Process The team is preparing to begin the manual onboarding process for the non-EDI accounts, with the scraped PDFs currently stored and ready for loading. A team member, Alberto, has been assigned to start creating portals in the system (UBM). However, significant concerns were raised about the risks of portal creation. - **Risk of Breaking Credentials:** A major risk identified is the potential to accidentally "break" existing credentials for a utility, which could trigger a notification to the current supplier (NG) and inadvertently reveal the client's termination plans. - **Mitigation Strategy:** To mitigate this risk, the team decided to have Alberto focus exclusively on loading the readily available non-EDI bill images into the system as a first priority, holding off on any portal creation that isn't definitively safe. This ensures immediate, visible progress while a safer, more informed strategy for portal setup is developed. ## Resource Allocation and Priority Management A key discussion point was ensuring that resources are focused on the right tasks to maximize progress. It was confirmed that Alberto should immediately begin processing the 5,000+ already-scraped non-EDI invoices. To facilitate this, he will be granted access to the folder containing these files and will require training on the internal system (CEP) for handling Constellation bills. The team is actively working to provide him with the necessary access and documentation. ## Progress Tracking and Next Steps A tracking mechanism will be established using the master account sheet to monitor progress meticulously. This will include columns to mark accounts as loaded or identified as EDI, providing a clear view of what has been accomplished.
The immediate action items are to send a kick-off email to the onboarding team and provide Alberto with access to the first batch of files to begin loading. Additionally, a separate list of 17 accounts with no available image links will be sent to the client for resolution.
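To make the EDI separation concrete, here is a minimal sketch that partitions the master account sheet before loading, so the onboarding team only touches the non-EDI volume. The file name and the `AccountType` column are assumptions about the spreadsheet layout.

```python
# Minimal sketch: split the master account sheet into EDI and non-EDI lists
# up front, mirroring the strategy of parking EDI accounts for a later
# workstream. Column and file names are placeholders.
import csv

def split_accounts(master_csv: str) -> tuple[list[dict], list[dict]]:
    edi, non_edi = [], []
    with open(master_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            is_edi = row.get("AccountType", "").strip().upper() == "EDI"
            (edi if is_edi else non_edi).append(row)
    return edi, non_edi

edi_accounts, loadable_accounts = split_accounts("simon_master_accounts.csv")
print(f"{len(edi_accounts)} EDI accounts parked, {len(loadable_accounts)} ready to load")
```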
UBM Planning
## Summary The meeting primarily focused on addressing a critical database performance crisis affecting multiple applications, while also reviewing progress on key development initiatives and introducing new infrastructure monitoring capabilities. ### Critical Database Performance Issues A severe database performance degradation was identified, causing widespread application crashes and timeouts across multiple systems. The issue manifested as frequent "wait operation timeout" errors in applications like FDG Connect and DDIs, with logs showing over 55,000 calls to specific stored procedures within a single hour. Investigation revealed several concerning metrics: one particular update operation on the DSS bill table accumulated 7,047 seconds of runtime in just one hour, while the `FN_get_processing_due_date_three` stored procedure was executed approximately 100,000 times during the same period. The team ruled out recent infrastructure changes or script deployments as potential causes, noting that scaling configurations remained unchanged with three instances running normally. ### Technical Investigation and Root Cause Analysis Immediate diagnostics were launched using SQL Server Management Studio (SSMS) to identify problematic queries and resource bottlenecks. The investigation focused on several key areas: examining long-running queries that consumed significant database resources, analyzing queries with abnormally high execution counts, and checking for potential looping mechanisms in recently deployed code. Database metrics showed surprisingly low memory consumption at around 10%, eliminating memory pressure as the culprit. The team specifically examined a clustered index update operation that appeared to be causing table resorting, though no recent index changes were made to the build table. ### UBM Output System Enhancements Significant progress was reported on converting the UBM output system from batched multi-bill processing to single-bill processing. This major architectural change requires substantial modifications to both the UBM output layer and underlying output code structures. The conversion is estimated to require several additional days of development work. Additionally, plans were discussed to transition from CSV to JSON format for bill information transmission to UBM, leveraging their newly added JSON acceptance capability. This format change would represent a more modern approach to data interchange. ### Infrastructure Monitoring and Visualization A new monitoring infrastructure using Grafana was introduced as a replacement for existing visualization tools. Grafana offers enhanced capabilities for capturing and analyzing application logs, providing more powerful visualization features compared to current solutions. The team discussed integrating error logging from file systems into Grafana to enable better monitoring of application issues and performance metrics. This represents a shift toward industry-standard monitoring practices that could provide deeper insights into system behavior and faster problem resolution. ### Cross-Team Collaboration and Support Coordination efforts were underway to assist with Azure migration tasks, with team members offering to help offload work from colleagues involved in the migration project. Graphics and visual assets were also submitted for review, though specific details about their purpose weren't elaborated during the discussion.
The team emphasized maintaining communication channels for rapid response to the ongoing database crisis, with members available for immediate consultation if needed during the investigation.
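As a reference for the SSMS-style triage described above, here is a minimal sketch, run from Python against placeholder connection details, that ranks cached statements by execution count alongside their total elapsed time. This is the standard `sys.dm_exec_query_stats` pattern that would surface a procedure executed roughly 100,000 times in an hour.

```python
# Minimal sketch: top statements by execution count from the plan cache,
# with total elapsed time converted from microseconds to seconds.
import pyodbc

TOP_QUERIES_SQL = """
SELECT TOP 10
    qs.execution_count,
    qs.total_elapsed_time / 1000000.0 AS total_elapsed_s,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1
    ) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.execution_count DESC;
"""

def print_hot_statements(conn_str: str) -> None:
    with pyodbc.connect(conn_str) as conn:
        for row in conn.cursor().execute(TOP_QUERIES_SQL):
            print(f"{row.execution_count:>10} execs  "
                  f"{row.total_elapsed_s:>10.1f}s  {row.statement_text[:80]}")
```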
Arcadia Integration Plan
## Summary ### Arcadia Integration Strategy The primary focus of the meeting was planning the integration of a new utility data provider, Arcadia, into the existing data processing system. The core strategy involves initially processing Arcadia's data via PDFs to minimize risk, as the logic for sending data to the ultimate destination (UBM) is not yet fully stabilized. A secondary, parallel path involves developing a script to convert Arcadia's provided JSON into the required DSS JSON format for future use. A significant architectural decision to be made is whether the system will "pull" data from Arcadia or if Arcadia will "push" it, with a preference for a setup similar to the existing "file watcher" model used for other vendors. ### Technical Implementation and Workflow A major technical hurdle identified is the need to enable the DSS (Data Services System) to accept and manage different workflows for different data sources. This is anticipated to be the most substantial initial development effort for the Arcadia integration. The process involves fetching data from Arcadia's API, which returns both a JSON and a path to a PDF, and then processing these files through a dedicated pipeline. The goal is to have a clear, tested process ready so that when Arcadia is contractually ready, the technical onboarding can be executed swiftly, tracking all steps from data fetch to final processing. ### Credential Management Overhaul A critical and parallel issue discussed is the current fragmented state of credential management across systems like Smartsheet and DSS. This causes operational confusion and potential data errors. The plan is to consolidate this into a single, authoritative system, likely a new database table, that syncs with all other systems. This new system would require that every set of credentials is explicitly linked to a specific customer and vendor, eliminating guesswork and ensuring data integrity. This overhaul is considered a prerequisite for a stable and scalable operation, irrespective of the Arcadia integration. ### Customer Identification and Data Routing A persistent problem affecting current operations is the failure to properly identify which customer an incoming bill belongs to, leading to "unmatched" bills. This occurs when files uploaded via FTP or email do not adhere to a strict naming convention that includes client code and account number. For the Arcadia integration, the solution is to anchor customer identification at the credential level; since the system uses specific credentials to fetch data for a specific customer, the resulting bills can be automatically associated with the correct client, bypassing the unreliable file-naming step. ### Vision for Future System and Onboarding A forward-looking vision was presented for a new, dynamic onboarding and reporting system. This system would feature a live Gantt chart to track customer onboarding and ongoing service SLAs, making bottlenecks and responsibilities transparent. Reporting would be dynamically tied to CRM data, ensuring metrics like progress toward goals are always based on the correct, scoped numbers. The entire process, from ingestion and data extraction to payment processing, would be tracked, allowing for quick identification of exactly where in the chain a failure occurred for any given customer. ### Operational and Sales Requests A separate operational request from the sales team to set up a demo customer account in the production environment was addressed.
This was deemed a significant effort that would require involvement from multiple teams. Given the lack of a formal contract with the prospective client, the request was prioritized as non-urgent and is likely to be deferred, as it would distract from core development and integration work.
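To sketch the preferred "pull" model for Arcadia, the snippet below fetches one statement from a hypothetical API endpoint, converts the JSON toward a DSS-style payload, and stamps the customer from the credential used to fetch it, bypassing file-name matching entirely. Every URL and field name here is an assumption, not Arcadia's actual API.

```python
# Minimal sketch of the Arcadia pull path: fetch JSON (which includes a PDF
# reference), convert it, and anchor the customer at the credential level.
# URL and field names are placeholders pending the real Arcadia contract.
import requests

def fetch_and_convert(api_base: str, token: str, statement_id: str,
                      customer_id: str) -> dict:
    resp = requests.get(f"{api_base}/statements/{statement_id}",
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=30)
    resp.raise_for_status()
    arcadia = resp.json()
    return {
        "customerId": customer_id,        # known from the credential, not the file name
        "pdfPath": arcadia.get("pdfUrl"),
        "accountNumber": arcadia.get("accountNumber"),
        "totalDue": arcadia.get("totalAmountDue"),
        "dueDate": arcadia.get("dueDate"),
        "source": "arcadia",              # lets DSS route to a source-specific workflow
    }
```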
[EXTERNAL] UBM Demo and planning
## Summary The meeting centered on prioritizing urgent operational tasks, with a significant portion dedicated to handling a sales-driven request for a UBM demo. Other critical discussions involved resolving systemic billing issues affecting a large volume of accounts and addressing a technical problem with late fee processing. ### UBM Demo Request for Evolution Broker The team evaluated a request from the sales team to generate demo bills for a potential broker client, Evolution, who is considering a contract. The request was deemed non-priority due to the lack of a signed contract and the high operational workload at the end of the month. - **Logistical Hurdles and Manual Effort**: Processing the demo bills is not a simple automated task; it requires significant manual setup from both Data Services and UBM Ops teams. This includes creating the customer account, mapping locations, and loading the bills, which constitutes a "decent amount of effort." - **Proposed Compromise and Timeline**: Instead of processing all 36 requested bills, the team agreed to a compromise of generating 10 sample bills. It was concluded that fulfilling the request for a demo the following day was not feasible. The plan is to push back on the urgency and handle the request the following week, with a specific process involving a team member named Alberto for Data Services setup. ### Ascension Account Bill Processing A major operational task was highlighted concerning the Ascension account, where a large number of bills are currently frozen and require immediate attention. - **Scale of the Issue**: There are nearly 600 bills that need to be unfrozen and re-parsed. This is a direct consequence of an issue with "current charges" that caused the bills to be frozen in the first place, indicating a significant data processing backlog. ### Late Fees Integrity Check Error A technical issue was identified where late fees are creating errors within the system's integrity checks, preventing bills from being processed correctly. - **Root Cause Analysis**: The problem stems from late fees being generated as separate, standalone blocks rather than as line items within existing bill records. This incorrect data structure causes the system's integrity checks to fail because it cannot map these isolated late fee blocks to a corresponding bill. ### Team Workload and Schedule Adjustment A decision was made to manage the team's capacity in light of the current workload and priorities. - **Meeting Cancellation for Focus Time**: The next scheduled meeting was canceled to give team members dedicated time to work through the existing queue of tasks, including the high-priority items discussed. This underscores the high-pressure environment and the need for focused execution. - **Victra Account Status**: A brief check-in confirmed that the Victra account was in a good state with no immediate issues, allowing the team to concentrate on the more pressing matters.
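As an illustration of the structural fix implied by the root-cause analysis, here is a minimal sketch that folds standalone late-fee blocks back into their parent bills as line items, so the integrity check has a bill to map them to. The matching key (account number plus service period) and the record shapes are assumptions about the data model.

```python
# Minimal sketch: re-attach orphaned late-fee blocks as line items on the
# matching bill; anything still unmatched is returned for manual review.
def attach_late_fees(bills: list[dict], late_fee_blocks: list[dict]) -> list[dict]:
    by_key = {(b["accountNumber"], b["servicePeriod"]): b for b in bills}
    orphans = []
    for fee in late_fee_blocks:
        parent = by_key.get((fee["accountNumber"], fee["servicePeriod"]))
        if parent is None:
            orphans.append(fee)  # still unmappable; leave for review
        else:
            parent.setdefault("lineItems", []).append(
                {"type": "late_fee", "amount": fee["amount"]})
    return orphans
```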
DSS Daily Status
## Summary The meeting focused on prioritizing and assigning technical tasks to improve the efficiency and stability of the DSS system. A primary topic was the need to properly document and add notes for all automated processes that send bills to various data stages, such as account setup and audit data, to ensure clarity and prevent issues. This includes documenting existing logic for duplicate resolution and charge discrepancies. Progress was noted on several fronts, including the resolution of a unit of measurement error affecting numerous bills and the completion of the Simon onboarding process for file exchanges. The discussion also emphasized the importance of cleaning up the backlog of tickets in the "to do" and "in progress" columns to ensure the system runs smoothly. There was a strong emphasis on setting up a local development environment for FDG Connect to enable work on new tasks, and a plan to reassign some existing report work to free up capacity. ## Wins - Successfully pushed a fix for the unit of measurement error affecting 1,400 bills. - The Simon onboarding process for file exchanges has been completed and is confirmed to be working as expected. - An automated retry mechanism for DP failures is already in place and functional. ## Issues - A need to fully explore and document every stage where bill data is automatically sent to DDS, as some processes like duplicate resolution were not fully documented. - The "to do" column contains a backlog of "paper cut" tickets that need to be prioritized and addressed. - The double-counting lines issue for line items remains unresolved and requires investigation. - The FTP folder setup for the Simon onboarding process is still pending. - The local setup for FDG Connect has not yet been completed, which is blocking work on related tasks. ## Commitments - **Owner**: Explore and document every automated step where bill data is sent to DDS to ensure notes are populated correctly. - **Owner**: Work on adding notes to bills sent to the "account setup" stage, specifically for summary bills. - **Owner**: Investigate and resolve the double-counting lines issue. - **Owner**: Complete the local setup of FDG Connect to begin work on assigned tasks. - **Owner**: Take on some reassigned report work to help with workload distribution.
Invoice Sharing Plan
## Summary The meeting primarily focused on resolving several operational and technical questions regarding file storage systems, document processing, and data loading procedures for a specific client project. Key decisions were made on the chosen method for handling invoice data transfers and clarifications were provided on external paperwork. ### File Storage Solution for Invoice Data The central topic was determining the best method to transfer a large set of invoices, with a decision made to proceed with Google Drive. The discussion involved comparing different technological solutions and understanding the client's specific needs. - **Selection of Google Drive over FTP:** The consensus was to use Google Drive for transferring the invoice set, as it was deemed the most straightforward solution. This decision was influenced by the need to accommodate a broker in the process who had complained about file size limitations with other methods like zip files. - **Client Precedent and Setup:** It was noted that another entity, Ascension, had previously chosen Google Drive over FTP, though the specific reasons were not recalled. The action item is to create a shared Google Drive folder and confirm with the relevant team (Afton's team) that they can access it. - **Purpose and Scale:** The solution is specifically for a large set of invoices that are too bulky for a zip file, confirming the data transfer is not for live bills but for a batch processing need. ### FBO Paperwork and External Queries Clarification was provided on how to handle external documentation, specifically FBO paperwork, for a user named Simon. - **Document Completion Process:** It was confirmed that the FBO paperwork is sent via DocuSign, allowing the recipient to complete it digitally by entering information into the fields. The alternative of manually printing, filling out, and emailing the form was also confirmed as a viable, though less efficient, option. ### Data Loading and System Processing Updates were given on the status of data loading tasks and a broader strategy for processing different types of data files. - **Extension and Caskey Data Load:** There was an issue with file rejection during a data load, which is being addressed by breaking the file into smaller batches of approximately one thousand records at a time. The goal is to complete this loading process by the end of the day. - **Strategy for EDI and Non-EDI Files:** A significant decision was confirmed to process all remaining files, both EDI and non-EDI, together. A previous batch of 5,000 files has been processed, with an additional 23,000 files remaining. The plan is to load everything into a single system, with the downstream team (Afton's team) taking responsibility for differentiating between the two file types. This strategy is pending final confirmation in a meeting scheduled for the following day. ### Upcoming Deliverables and Client Reporting The conversation touched upon upcoming tasks and a discrepancy in a client report deadline. - **Onboarding Plan and Data Batches:** A meeting is scheduled for the next day to discuss the onboarding plan, and the necessary data batches have not yet been sent to the Data Science (DS) team. - **Tableau Report Timeline:** A concern was raised regarding a Tableau report for the Ascension project. The developer communicated a Wednesday completion date, whereas the client was expecting it by Friday, requiring further investigation to understand and align on the timeline.
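For the rejected-file workaround, here is a minimal sketch of the chunked load: split the record set into batches of roughly one thousand and submit them one at a time, so a rejection invalidates only a single batch. `submit_batch` stands in for whatever loader the team actually calls.

```python
# Minimal sketch: load records in batches of ~1,000 to contain rejections.
from typing import Callable

def load_in_batches(records: list[dict],
                    submit_batch: Callable[[list[dict]], None],
                    batch_size: int = 1000) -> None:
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        submit_batch(batch)
        print(f"loaded records {start + 1}-{start + len(batch)} of {len(records)}")
```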
PayClearly Sync Update
## Payment Reconciliation and Issue Resolution The meeting focused on reviewing and resolving outstanding payment-related issues, primarily concerning data synchronization between systems. A significant portion of the discussion was dedicated to a list of approximately 400 payments that required review. The majority of these were found to be non-issues, consisting of $0 payments or negative amounts that did not require actual payment processing. The remaining payments were either sitting in the UBM system or were unprocessed; these were subsequently submitted for payment via a file sent to PayClearly, which was successfully loaded. The immediate action is to ensure PayClearly prioritizes processing these payments. Additional tasks were assigned for review, including: - **Updated Location Mapping:** An updated list for mapping locations is expected, and it requires review once received. - **Past Due Amounts:** Specific past due amounts from 2015 and 2016 need to be examined to resolve these longstanding items. - **Virtual Accounts:** Unmapped virtual accounts also require attention as part of the cleanup process. A broader, more automated method for identifying such payment issues is under development, but manual review of the identified items remains a critical step to ensure accuracy in the interim. ## Data Synchronization Challenges with PayClearly A central and critical topic was the data synchronization process between PayClearly and the internal UBM system. The current system is failing to reflect the true status of payments from PayClearly in a timely and accurate manner, which is causing reporting discrepancies and operational inefficiencies. The core of the problem is an implementation limitation where the system only checks for status updates on payments that were created within the last 10 days. This is insufficient because payment statuses, such as a check being cashed, can change long after this 10-day window, sometimes up to 40 or 50 days later. Consequently, payments that are marked as "Complete" in PayClearly may still appear as "Processing" in UBM, leading to confusion and inaccurate data for teams like Ascension that rely on UBM for their information. The primary goal is to modify this process to pull data in a more real-time fashion and ensure that the status of *all* payments in UBM accurately reflects their current status in PayClearly, regardless of the payment's age. A proposal to extend the update window to 30 days was mentioned, but the ideal solution is to move towards updating everything. ## Impact on Reporting and Future Capabilities The inaccuracies in the payment data have a direct and negative impact on downstream reporting and business intelligence efforts. The data pulled from PayClearly serves as the foundation for several Power BI reports. Because this source data is currently unreliable, these reports cannot be fully trusted or utilized effectively. Once the data synchronization issue is resolved, it will unlock significant reporting capabilities. With accurate and timely data, the team will be able to build valuable new reports, such as: - **An uncashed check report:** This would automatically identify checks that have been issued but not yet cashed, improving cash flow management. - **Payment data at the billing account level:** This would provide more granular insights into payment behaviors and account statuses.
Currently, to circumvent the unreliable automated data pull, a manual process is in place where someone extracts data directly from PayClearly every morning and distributes it, highlighting the urgency of fixing the underlying system issue. ## Other Ongoing Operational Tasks Beyond the major data sync problem, the conversation touched upon other operational workstreams. The resolution of the payment clearance information from PayClearly was confirmed to be an internal responsibility, not an issue on PayClearly's end. The discussion also confirmed that work on the unmapped virtual accounts and the review of past due amounts would be addressed imminently. A follow-up with another team member is planned to understand the technical constraints behind the current data sync limitations and to chart a path forward for both short-term and long-term solutions.
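To show the shape of the proposed sync change, here is a minimal sketch that refreshes every payment whose UBM status is still non-terminal, regardless of age, instead of only payments created in the last 10 days. The status names and the `fetch_status` lookup are assumptions about the integration.

```python
# Minimal sketch: status-driven sync instead of a created-date window, so a
# check cashed 40+ days after issuance still flips to Complete in UBM.
from typing import Callable

OPEN_STATUSES = {"Processing", "Submitted", "Issued"}  # assumed non-terminal states

def sync_payment_statuses(ubm_payments: list[dict],
                          fetch_status: Callable[[str], str]) -> int:
    updated = 0
    for p in ubm_payments:
        if p["status"] not in OPEN_STATUSES:
            continue                       # already terminal, nothing to pull
        latest = fetch_status(p["paymentId"])
        if latest != p["status"]:
            p["status"] = latest
            updated += 1
    return updated
```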
Report
## **Summary** The meeting primarily focused on reviewing a workflow status report for invoice processing and troubleshooting a project setup issue. A significant portion was dedicated to understanding the meaning and sequence of various workflow status columns in the reporting system, with the goal of creating a comprehensive visual map of the entire invoice lifecycle. Additionally, progress updates were given on two key development tasks. ### **Project Setup and Access Issues** A project labeled "FDG" has been set up successfully, but a UI display issue persists where headings and components are not visible, mirroring the problem observed on the live server. During the call, attempts to diagnose this issue revealed a potential permissions or access problem, as the application failed to load correctly even after confirming VPN connectivity and authentication. The resolution of this access issue was delegated for follow-up with another team member. ### **Archiving Functionality Development** The implementation for archiving files from prior to the current year is complete and currently undergoing validation. This process involves complex queries across both Cosmos and DDS databases. It was confirmed that the setup for a separate, monthly archiving process is not yet in place, but this was deemed a lower priority that could be addressed later. ### **Analysis of Operational Issues** Two recurring operational problems were highlighted for resolution. The first involves "unknown vendors," where a source of data leakage has been identified, providing confidence that there are no other underlying causes. The second issue concerns "do not use meters," which appears to be a documentation-related problem that has resurfaced despite previous fixes. The emphasis was on compiling a definitive list of these issues to address them comprehensively and prevent future recurrences. ### **Deep Dive into Workflow Status Report Columns** The core of the meeting involved a detailed walkthrough of a workflow status report to understand the meaning and sequence of each column, which represents a stage in the invoice processing pipeline. - **Client, Batch, and Original File Name:** These columns represent the hierarchical structure of incoming data, starting with the client, then the document batch uploaded by the user, and finally the individual original files within that batch. - **Bills Column:** This shows the total number of bills currently in the processing system, specifically excluding those in the "Ready to Send" status, providing a snapshot of active work. - **Account Setup:** This status indicates bills that cannot be matched to an existing account in the system, requiring a new account to be created before processing can continue. - **DSS IPS Column:** This is a consolidated column that encompasses multiple data processing steps, including Optical Character Recognition (OCR) and the LLM (Large Language Model) service, representing the core automated extraction and interpretation phase within the DSS system. - **Identification of Deprecated Workflows:** The discussion uncovered at least two workflow statuses, "Map Vendor" and a separate "OCR Service," that are believed to be deprecated and no longer part of the active process, as their functions have been integrated into the DSS IPS phase. This highlighted a need to clean up the reporting view to remove historical or unused steps.
- **Data Acquisition:** The purpose and current activity within this status column were unclear, requiring further clarification with the operational team to understand if it involves manual data downloading or another function. - **Audit Data:** This status represents bills that require human review before they can be sent, a step that exists outside the main DSS system, with a desire to eventually migrate this function into DSS for operator efficiency. - **Duplicate Resolution and Ready to Send:** These final columns are self-explanatory; "Duplicate Resolution" flags bills identified as duplicates, and "Ready to Send" represents bills queued for dispatch, which can sometimes get stuck due to failures in the output service. ### **Actionable Follow-ups** To solidify the understanding of the workflow, a key action item was assigned to compile a reference list mapping each status column to its corresponding workflow ID and a brief description. This will aid in building a complete visual representation of the invoice lifecycle and in cleaning up the report by confirming which workflow statuses are active versus deprecated.
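For the reference-list action item, a small script could generate the mapping from the system of record rather than by hand. The sketch below assumes a `WorkflowStatus` table with ID, name, and description columns; these names are placeholders until the actual schema is confirmed.

```python
# Minimal sketch: dump each workflow status and its ID to a markdown table,
# producing the reference list requested for mapping report columns to
# workflow IDs. Table and column names are assumptions.
import pyodbc

def dump_workflow_reference(conn_str: str,
                            out_path: str = "workflow_reference.md") -> None:
    sql = ("SELECT WorkflowId, StatusName, Description "
           "FROM WorkflowStatus ORDER BY WorkflowId;")
    with pyodbc.connect(conn_str) as conn, \
         open(out_path, "w", encoding="utf-8") as out:
        out.write("| Workflow ID | Status | Description |\n|---|---|---|\n")
        for row in conn.cursor().execute(sql):
            out.write(f"| {row.WorkflowId} | {row.StatusName} "
                      f"| {row.Description or ''} |\n")
```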
Abdullah Call
## Market Size and Local and Global Opportunities The target market size was discussed in detail, with a focus on Saudi Arabia and the Gulf states. The potential market size is estimated at roughly **$65 million**, based on the number of established businesses and startups, which keeps growing. It was noted that there are more than 4.3 million businesses in the region, underscoring a sizable opportunity. It was also emphasized that the market is **very large** and growing, with new businesses appearing daily. That said, a question was raised about whether the focus should stay on the local market or expand globally, since localizing ideas for the region may limit growth compared with offering a globally marketable product. ## Product Strategy and Competitive Positioning The challenge of achieving **Product-Market Fit** locally versus globally was highlighted. The discussion stressed not settling for a "localized copy" of existing services, but instead building a product that can compete worldwide. It was noted that starting with a local venture may be a necessary first step to discover that fit before expanding. A market gap was also discussed: many local organizations struggle to execute certain processes manually, creating an opportunity for more efficient solutions. ## The Role of AI in Process Automation and Branding **Artificial intelligence** was discussed as a decisive tool for achieving consistency and efficiency in operations, especially in areas that rely on human input such as marketing operations and brand management. A practical example was cited in which AI was used to analyze and manage a **brand identity** by building a knowledge base that defines the brand's personality from specific inputs (such as 10 images and 10 decisions). This tool helped resolve **consistency** problems in written content (captions) and production, and showed how AI can replace manual processes prone to error and variation. ## Pricing Models and Target Customers A pricing model based on **credits** was explained, with packages offering 20, 45, or 100 credits. Existing customers were mentioned, including one on the 100-credit monthly package and others on the 20-credit annual package. The model was compared, in cost and effectiveness, with hiring a graphic designer or content writer, since the offered solution covers a broader range of services (design, video, music, writing) at a potentially lower cost. Prices were also compared with what local agencies charge, which can be thousands of riyals for a limited bundle of services. ## Challenges and Opportunities in the Local Market A key challenge was highlighted: some international or regional companies entering the Saudi market lack an accurate understanding of **local culture and its nuances**. This creates an opportunity for local players who have that knowledge to offer more precise, better-fitting services. Examples were cited of translation or cultural mistakes by non-Saudi companies that hurt their credibility. In addition, the challenge facing **freelancers**, namely inconsistency in work quality and commitment compared with organized companies, was discussed. ## Strategic Direction and Future Ways of Working It was emphasized that the traditional rules of founding and hiring have changed, with the **Solo Founder** model now viable thanks to modern tools. However, it was stressed that two elements cannot be fully automated and remain decisive: - **Human interaction with the client**: the personal, human relationship with the client is indispensable. - **Quality control and oversight**: someone (such as an Account Manager) must review the work and ensure it meets quality standards before it is delivered to the client. It was also noted that internal AI-based tools could be developed to serve clients in a more tailored and efficient way, rather than selling time in the traditional manner.
كراسة and the Founding Partner
## The Client The prospective client has a core background in fintech, where he worked for a long time and gained broad experience. During that period he learned many valuable lessons, but he recently decided to turn that page and move into an entirely new phase centered on the startup world. The motivation behind this shift was his realization that his side projects were more consistent and sustainable than he had previously thought, which led him to found his current venture, "كراسة". His core passion is understanding companies comprehensively by reading everything about them, which he is trying to deliver through his new service. He faced past challenges dealing with agencies and freelancers, feeling they did not understand his real needs, which reinforced his decision to build a different service model. ## The Company كراسة focuses on providing comprehensive branding services for companies, including landing pages, pitch decks, websites, content, and design. The company's core philosophy is to deliver these services as an integrated package without requiring direct, in-person client engagement, aiming to cut out the traditional intermediation of agencies and freelancers. The company embraces two modern technology directions: the first is "فلساك AI", which focuses on using AI to accelerate engagement processes, and the second is "Result-as-a-Service", aimed at giving founders ready-made deliverables that save them long hours developing their ideas. كراسة primarily targets digitally present customers who can be reached online, with a current focus on startups. ## Priorities - **Fix the service model:** Address the problems of working with traditional agencies and freelancers by offering a unified, comprehensive service that removes the need for them. - **Focus on startups:** Target the startup segment as the core audience for now, with potential later expansion to larger companies. - **Rely on technology:** Harness AI and process automation to deliver faster, more efficient services, such as "فلساك AI" and "Result-as-a-Service". - **Strengthen digital presence:** Build a strong presence on social media and the web in general, as the primary communication and marketing channel with target customers. - **Focus on solvable problems:** Choose simple, clear business problems that can be solved effectively, rather than complex problems whose solutions are hard to prove.
DSS Daily Status
## Summary The meeting focused on addressing critical issues causing approximately 700 invoices to be stuck in the DSS (Data Submission Service) queue, primarily due to two distinct root causes. A deep dive was conducted to understand the failure mechanisms, assess the current state, and outline the necessary corrective actions. ### Root Cause Analysis: Failed Invoice Reprocessing A significant issue identified was the lack of an automated system to handle invoices that fail initial processing. This has led to a large backlog that requires manual intervention to clear. - **Missing Automated Reprocessing:** It was confirmed that while a method for automated reprocessing exists in the code, it has been commented out and is not operational. The proposed solution is to review, refine, and reactivate this mechanism. - **Proposed Implementation Schedule:** The plan is to implement an automated nightly job to reprocess failed invoices, with a suggested execution time of 2:00 AM EST. This is considered a high-priority task that needs to be completed by the following week. ### Investigation into Stuck Invoices The discussion involved a live analysis of the DSS queue to quantify the problem and identify specific error patterns. The initial figure of 700 stuck items was refined to a more precise count of 624 invoices specifically failing in the DSS pre-audit stage. - **Quantifying the Problem:** A real-time query was executed against the system to pull all current error messages from the failed DSS audit workflow, providing a clear dataset for analysis. - **Discovery of Critical Failures:** The investigation revealed alarming instances of live bills from major clients (e.g., Victor, Constellation) failing the pre-audit process, indicating a systemic issue affecting real-time operations. ### Critical System Flaw: Client Identification Logic The most severe problem uncovered is a fundamental gap in the system's logic for identifying the client associated with invoices uploaded via the File Watcher (FTP), leading to a high rate of failures. - **Systemic Failure for File Watcher Uploads:** It was determined that the system has **no logic** for identifying the FDG client for bills ingested through the File Watcher. This means any invoice uploaded via this method is almost guaranteed to fail the DSS pre-audit with errors like "Unable to identify client." - **Immediate Operational Impact:** This flaw renders the File Watcher upload path functionally unusable for client billing until the underlying logic is implemented, as these invoices become permanently stuck in the queue. - **Potential Workaround:** A possible alternative for clients is to use the bulk upload functionality within FDG Connect, which may not be subject to the same identification issue. ### Action Plan and Communication Strategy A two-pronged approach was formulated to address the immediate crisis and prevent future occurrences, involving both technical fixes and stakeholder communication. - **Technical Verification:** An immediate action item is to confirm the existence and effectiveness of any client identification logic that parses information from the file names of uploaded invoices, as this was suggested as a potential but unverified existing feature (a sketch of such parsing follows below). - **Stakeholder Communication:** Once the technical details are confirmed, a communication must be sent to the relevant team (e.g., Afton's team) to inform them that invoices are failing because the expected file naming scheme or client identification protocol is not being followed.
This will help mitigate the inflow of new problematic invoices.
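For the verification task above, here is a minimal sketch of what filename-based client identification might look like. The naming scheme (`CLIENTCODE_ACCOUNT_YYYYMMDD.pdf`) and the client registry are illustrative assumptions, not the confirmed FDG convention:

```python
import re

# Hypothetical registry of FDG client codes; the real mapping would live in a database.
KNOWN_CLIENTS = {"VICTRA": 101, "CONSTELLATION": 102}

# Assumed naming scheme: CLIENTCODE_ACCOUNT_YYYYMMDD.pdf (illustrative only).
FILENAME_PATTERN = re.compile(r"^(?P<client>[A-Z]+)_(?P<account>[\w-]+)_(?P<date>\d{8})\.pdf$")

def identify_client(filename: str) -> int | None:
    """Return the FDG client id parsed from an uploaded file name, or None."""
    match = FILENAME_PATTERN.match(filename)
    if not match:
        return None  # would surface as "Unable to identify client" in pre-audit
    return KNOWN_CLIENTS.get(match.group("client"))

assert identify_client("VICTRA_123-456_20241101.pdf") == 101
assert identify_client("invoice.pdf") is None
```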
Weekly DSS call
## Summary ### The Core Problem: Unknown Vendor Customers The central issue discussed was the occurrence of "unknown" vendor customers in the system. This problem arises when DSS fails to process a bill for any reason, preventing the system from correctly identifying and setting the vendor customer associated with that bill. A significant concern is that these "unknowns" do not trigger any alerts or appear in standard operator workflows, making them difficult to detect without manual intervention. ### Root Cause Analysis The primary source of these unknown records was traced back to the process of uploading bills through the FDG Connect platform, specifically when using the download helper feature. It was noted that during this upload method, information was not being retained properly before being sent to DSS, leading to the identification failures. While a recent fix was deployed to address duplicate bill issues, the fundamental challenge of identifying the vendor customer before DSS fully processes the bill remains a systemic design consideration. ### A Short-Term Monitoring Solution To address the immediate need for visibility, a new monitoring tool was introduced. This is a modified version of the system status report within Power BI, which now includes a dedicated view for tracking unknown vendor customers. This new report provides a queue-based overview, showing approximately 2,100 records currently stuck in the "account setup" phase, with smaller numbers in other queues like DSS IPS. The report allows team members to look up problematic batches using the provided document batch IDs, enabling manual investigation and resolution. ### Investigation and Cleanup Strategy Initial analysis of the unknown records suggests that a large portion are likely duplicates, many dating back to July and September, which have probably been repulled and reprocessed successfully since. The strategy for handling these involves a manual review process, with an expectation that most will be deleted from the system. The team plans to incorporate a daily check of the new report into their routine to proactively manage this issue moving forward. ### Long-Term Considerations and Next Steps The discussion concluded with an acknowledgment that while the new report provides a crucial short-term solution for monitoring, a more robust, long-term fix is needed to "stop the leakage" and prevent unknown unknowns from entering the system in the first place. The underlying design decision regarding how and when vendor customer information is populated from DSS was flagged as an area requiring further discussion and potential redesign.
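To make the planned daily check concrete, here is a minimal sketch of the per-queue tally the team might run. The record fields (`queue`, `vendor_customer`) are assumptions standing in for whatever the Power BI status report actually queries:

```python
from collections import Counter

# Hypothetical extract of bill records; in practice this would come from the
# system status report's underlying data source.
bills = [
    {"batch_id": "B-1001", "queue": "account setup", "vendor_customer": None},
    {"batch_id": "B-1002", "queue": "DSS IPS", "vendor_customer": None},
    {"batch_id": "B-1003", "queue": "account setup", "vendor_customer": "Acme Utilities"},
]

# Daily check: count "unknown" records (no vendor customer set) per queue,
# mirroring the queue-based view added to the status report.
unknowns = Counter(b["queue"] for b in bills if b["vendor_customer"] is None)
for queue, count in unknowns.most_common():
    print(f"{queue}: {count} unknown vendor customer record(s)")
```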
Missing Bills Report
## Summary ### Establishing the Baseline Invoice Count The meeting centered on finalizing and validating the correct baseline number of invoices for the utility billing platform. The confirmed figure is **2,755**, which includes some quarterly bills. This number is critical as it serves as the benchmark for all future billing cycles; any deviation from it signals a problem in the invoicing or payment process. The team is working to ensure this number is accurately reflected across all systems. - **Platform Updates and Duplicate Resolution:** A significant effort is underway to update the platform and resolve duplicate entries. A list of duplicates has been identified and is being actively worked on to either close the accounts or update the platform, with the goal of ensuring that accounts marked for "Ops Review" are not just flagged but are actually resolved and removed from the system. ### Urgent Resolution of Missing Invoices A critical issue involves approximately 400 invoices, with a subset of 55 for which invoices have never been received. Immediate action is required to resolve this today to prevent utilities from falling into arrears next week. - **Utilization of Mock Bills:** As a temporary measure, mock bills have been submitted for many of these accounts to facilitate payment and avoid service disruptions. However, there is a strong emphasis on solving the underlying root causes. - **Root Cause Analysis:** A deep dive is needed to understand why these invoices are missing. Potential causes include a lack of credentials, reliance on paper bills, or accounts that are new and will not appear on portals for several billing cycles. The team is committed to identifying the specific reasons for each missing invoice to prevent recurrence. ### Reconciliation of Past Payments and Vendor Reporting An analysis is being conducted to reconcile all payments made over the last three months (August, September, and October) to ensure 100% coverage for all accounts. The goal is to identify and resolve any variances, such as instances where an estimated bill was paid instead of the full amount. - **Vendor Request for Mock Payment Report:** A request from the client, Victor, for a report detailing all mock payments by vendor and amount was discussed. This report is anticipated to reveal a high percentage of mock bills versus actual bills, which could raise questions about the overall control and management of the billing process. - **Payment Method Corrections:** For the client Victra, 16 payments are being canceled and reissued via ACH, with an additional 130 payments scheduled for completion by the end of the day. ### Challenges with Vendor Onboarding and Credentials A major operational challenge identified is the high rate of check payments (approximately 40%) for the Victra account, which is believed to be a symptom of credential synchronization issues. When payment credentials are not available, the vendor defaults to issuing paper checks, leading to delays. - **Credential Sharing and Security Compliance:** The team is exploring solutions to ensure the vendor has the necessary credentials. Options include adding credential information directly to the payment files or improving the frequency of system synchronization. A significant consideration is the **security and compliance (SOC) implications** of sharing credentials, which may require implementing new controls even if it adds operational steps. 
- **Streamlining the Process:** The hypothesis is that by ensuring the vendor has complete and up-to-date credentials, the reliance on checks will drastically decrease. An audit is needed to verify what credential information the vendor currently has access to in their system. ### Resolution of Executive Sign-Off for Utility Access A roadblock remains in obtaining the necessary Letter of Authorization (LOA) for certain utilities, which requires an executive-level signatory. The vendor is attempting to find a workaround to avoid escalating the request to their CFO. - **Exploring Alternative Signatories:** The possibility of having another company officer, such as "Vinnie," sign the document is being explored to circumvent the need for CFO approval and expedite the process. Maintaining a clear paper trail of all LOA submission attempts is emphasized as a critical best practice.
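As a small illustration of the baseline logic described above, a sketch that compares a cycle's invoice count against the confirmed 2,755 benchmark; the tolerance parameter is an assumption:

```python
BASELINE_INVOICES = 2_755  # confirmed benchmark, includes some quarterly bills

def check_cycle(cycle_name: str, invoice_count: int, tolerance: int = 0) -> None:
    """Flag any billing cycle whose invoice count drifts from the baseline."""
    deviation = invoice_count - BASELINE_INVOICES
    if abs(deviation) > tolerance:
        print(f"{cycle_name}: {invoice_count} invoices ({deviation:+d} vs baseline) -- investigate")
    else:
        print(f"{cycle_name}: on baseline")

check_cycle("November", 2_700)   # short 55 invoices -> signals missing bills
check_cycle("December", 2_755)   # on baseline
```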
EDI Reconciliation Plan
## Summary ### The Core Challenge: Identifying EDI vs. Non-EDI Accounts The central issue discussed revolves around a significant data discrepancy that complicates the process of identifying EDI (Electronic Data Interchange) accounts within a large master list provided by the client, Simon. The client supplied a master list of 8,823 accounts, along with a separate list of 2,457 accounts they claim are EDI. However, attempts to reconcile these lists have revealed major inconsistencies, creating a bottleneck for the onboarding process. ### Data Discrepancy and Reconciliation Issues A critical problem is the mismatch between account numbers on the different lists provided by the client. When attempting to match the 2,457 EDI accounts against the master list, only around 849 accounts were found to match directly. This leaves a gap of approximately 1,300 accounts from the EDI list that cannot be reconciled with the master account numbers. The discrepancy is attributed to different formatting of account numbers between EDI and non-EDI accounts, such as the presence or absence of dashes and spaces, which prevents a straightforward automated matching process. ### Evaluating Potential Solutions The discussion explored several potential paths forward to resolve the data impasse and continue with the account onboarding. - **Requesting Clarified Data from the Client:** One proposed solution was to go back to the client, Drew, and request that they provide a pre-sorted list or the specific information needed to accurately slice the data. However, there is a strong concern that this request would be poorly received, as the client has already provided the data and expects the team to handle the identification process, especially given the unusually high $40,000 onboarding fee. - **Manual Identification and Loading Process:** Another option considered was a manual, large-scale effort to load all accounts and identify the EDI accounts through the setup process. This would involve loading all accounts into the data services platform, having operators process the ones with physical bill copies, and then separately handling the EDI accounts. The primary drawback is the immense amount of manual labor and time this approach would require. - **Proceed with Loading and Handle Exceptions:** A hybrid approach was suggested: begin loading all available accounts immediately to demonstrate progress. The team would process the 7,412 non-EDI accounts first, while parking the identified EDI accounts. For the EDI accounts, operators would attempt to pull physical bill copies and maintain a separate tracking list. This method accepts that the full scope of EDI accounts will only become clear as the work progresses. ### Recommended Path Forward and Resource Implications A consensus emerged to move forward with the third option: initiating the loading process for all accounts without further delay. The immediate plan is to start with the 7,412 non-EDI accounts to show weekly progress, which is critical for maintaining the client's confidence. The team will handle EDI accounts as exceptions, with operators keeping a dedicated list of account numbers that are identified as EDI during the process. Given the sheer volume of work and the high stakes of this client engagement (which includes potential future commodity supply contracts), a strong recommendation was made to secure additional dedicated personnel to manage this onboarding effectively.
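A minimal sketch of the normalization approach implied by the dashes-and-spaces discussion: canonicalize account numbers before matching the EDI list against the master list. The sample account formats are invented, and the real discrepancy may involve more than punctuation:

```python
def normalize(account: str) -> str:
    """Canonicalize an account number by dropping dashes, spaces, and case."""
    return "".join(ch for ch in account if ch.isalnum()).upper()

master = ["100-200-300", "400 500 600", "ABC-789"]
edi = ["100200300", "abc789", "999999"]

# Index the master list by canonical form, then look each EDI account up in it.
master_index = {normalize(a): a for a in master}
matched = {a: master_index[normalize(a)] for a in edi if normalize(a) in master_index}
unmatched = [a for a in edi if normalize(a) not in master_index]

print(matched)    # {'100200300': '100-200-300', 'abc789': 'ABC-789'}
print(unmatched)  # ['999999'] -- would go to the manual exception list
```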
DDAS Manual Processed Invoices
## Data Validation and Invoice Discrepancies A critical issue was identified concerning significant data mismatches in processed files, specifically impacting invoice accuracy. The core problem stems from an incorrect column definition in a script run the previous day, which resulted in erroneous data outputs. A manual verification process was initiated to compare the files from the current day against the corrected versions from the previous day, revealing 124 specific invoices with discrepancies between key financial columns. The immediate plan is to isolate these problematic invoices by creating a new column that subtracts one value from the other; any result not equal to zero will flag an invoice for manual review. This is a high-priority issue as it has a direct and immediate impact on customer payments. ## Invoice Processing Workflow and Escalation The meeting highlighted significant bottlenecks in the invoice processing workflow, particularly concerning invoices from "Constellation." Frustration was expressed over the fact that three separate channels are supposed to be obtaining these invoices, yet failures still occur. A clear escalation path was defined: when invoices are not pulled in a timely manner, the issue should be escalated directly to specific senior individuals to demand immediate processing. The need to eliminate redundant efforts and hold the responsible channels accountable was emphasized to streamline this critical operational function. ## System Limitations and Reporting Halt Work on a key reporting piece has been completely halted due to unresolved, "unknown issues" within the "DSS" and "DDIS" systems. These system errors prevent the crucial action of marking customers with a specific status, which is a prerequisite for generating accurate reports. Consequently, all reporting dependent on this data is on hold until these systemic issues can be resolved, which is not anticipated until the following week. In the interim, the team will continue to rely on existing, static reports for their operational needs. ## Data Access and Centralization To improve collaboration and data accessibility, a decision was made to centralize a key data source. Instead of its current location, the data will be moved to a shared platform like SharePoint. This move is intended to ensure that all team members have simultaneous and equal access to the information until a more permanent, integrated solution, such as a Power BI dashboard, can be developed and implemented. ## Cross-Functional Collaboration and Skill Gaps A need for better cross-functional collaboration was identified, specifically concerning the processing of invoices from the "gas" system. It was acknowledged that internal expertise for navigating this particular system is limited, creating a dependency on individuals from other departments. The plan is to leverage existing professional relationships with contacts who are proficient in the gas system to bridge this knowledge gap and ensure these invoices can be processed effectively.
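A minimal pandas sketch of the flagging step described in the first section; the column names (`amount_today`, `amount_corrected`) are assumptions standing in for the two financial columns being compared:

```python
import pandas as pd

# Hypothetical merged dataset: today's file joined to the corrected file by invoice.
df = pd.DataFrame({
    "invoice_id": ["INV-1", "INV-2", "INV-3"],
    "amount_today": [100.00, 250.50, 75.00],
    "amount_corrected": [100.00, 245.50, 75.00],
})

# New column: subtract one value from the other; any nonzero result flags a mismatch.
df["difference"] = df["amount_today"] - df["amount_corrected"]
needs_review = df[df["difference"] != 0]
print(needs_review)  # INV-2 would be among the invoices queued for manual review
```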
Report
## Summary ### Investigating Unknown Customer Data Issues An investigation was launched into a critical data issue where certain customers are being classified as "unknown" within the system, which is suspected to be a primary cause of missing payments and subsequent service disconnects. The core problem is that these customers appear to be stuck in the system, invisible to the teams responsible for actioning their accounts, leading to billing failures. A specific hypothesis was put forward suggesting the issue might stem from a process conflict, such as a vendor connection being excluded from the normal data flow but still being picked up by a secondary data audit process, creating a duplicate or orphaned record. The investigation was deemed urgent, with a plan to sync later in the morning to establish a concrete action plan and identify the root cause. ### Addressing Payment and Reporting for Prepay Q4 A separate but related topic involved the processing and reporting for Prepay Q4 acquisitions. It was confirmed that some data pertaining to this period had been received and required attention. The discussion indicates a need to ensure that the reporting for this specific quarter is accurate and actionable, particularly in understanding the movements and statuses of prepay and postpaid customers to provide clear insights for the team. ### Power BI Report Editing Constraints The feasibility of editing Power BI reports was briefly discussed, highlighting a significant technical limitation. It was confirmed that editing these reports requires a Windows environment and a corresponding license, as attempts to do so on a Mac were unsuccessful and required the use of a virtual machine. This constraint means that, for the time being, report editing is limited to team members with access to the necessary Windows-based infrastructure. ### Processing of EDI and Non-EDI Data The status of EDI and non-EDI data processing was reviewed. While the data services system (DSS) is technically capable of handling account setups for this data, there is a perception that the operational team may lack the full knowledge to execute this process effectively. Consequently, these items may still be funneling through a manual workflow, indicating a potential gap between system capability and operational execution that needs to be addressed. ### Request for PayClearly API Key A final administrative point involved a request for a PayClearly API key, which is needed for reporting or a new dashboard. The established procedure for obtaining this key is to initiate a request through a dedicated Slack channel involving specific stakeholders, who would then facilitate providing the necessary access credentials.
Unknown Bills Fix
## Summary The meeting primarily focused on technical progress updates regarding system reports and data processing, followed by a strategic discussion on addressing systemic issues with unknown customer data. ### Duplicate Check Development Work is actively underway to develop a duplicate check mechanism for processed bills. The development involves creating a system to identify duplicate entries based on specific bill attributes: **invoice date, due date, and the amount due**. A task was noted as needing to be formally created to track this ongoing work. ### Unknown Vendor Bill Report Significant progress was made on a report designed to surface bills from unknown vendors. The core issue, which was preventing these bills from appearing, was identified and resolved. - **Root Cause and Fix:** A filtering condition within a procedure function was incorrectly excluding these bills. The logic was updated to ensure that "non-provider" or unknown vendor options now appear in the dropdown filters. - **Enhanced Data Visibility:** The procedure was further modified to include node records, which now allows unknown vendor client bills to be visible within the workflow steps view. - **Publishing and Permission Hurdle:** A separate issue prevents the report from being published successfully by the developer, which appears to be a **permissions-related problem** concerning the connection to the underlying data source. A workaround was identified where a user with admin access can publish the report without connection failures. ### Systematic Handling of Unknown Customers A deep-dive session was held to analyze the current state and usability of the unknown customer report, revealing critical operational bottlenecks. - **Current Data Snapshot:** The report shows a substantial number of bills (e.g., 2,700 in one category) that are stuck in the system, specifically within stages like "DSS and account setup." The major concern is that these bills **cannot progress through the workflow** because they are associated with unknown customers and thus are invisible to standard operational filters. - **Actionability and Usability Gaps:** The report in its current form lacks direct links to take action on these bills. It was questioned whether a link could be added for bills in the "DSS" stage, potentially requiring a query to Cosmos DB to retrieve the necessary details. - **Strategic Concerns:** The discussion culminated in a fundamental strategic question: how to make the report more actionable for the operations team and, more importantly, how to **systemically minimize the occurrence of unknown customers**. This points to a potential underlying issue with the current data intake or setup processes that needs to be addressed at a foundational level.
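A minimal sketch of the duplicate check under development, assuming the three attributes named in the meeting (invoice date, due date, amount due) form the full matching key:

```python
from collections import defaultdict

bills = [
    {"id": 1, "invoice_date": "2024-11-01", "due_date": "2024-11-20", "amount_due": 512.30},
    {"id": 2, "invoice_date": "2024-11-01", "due_date": "2024-11-20", "amount_due": 512.30},
    {"id": 3, "invoice_date": "2024-11-03", "due_date": "2024-11-22", "amount_due": 98.10},
]

# Group bills by the (invoice date, due date, amount due) key from the meeting.
groups: dict[tuple, list[int]] = defaultdict(list)
for bill in bills:
    key = (bill["invoice_date"], bill["due_date"], bill["amount_due"])
    groups[key].append(bill["id"])

# Any key shared by more than one bill id is a duplicate candidate.
duplicates = {key: ids for key, ids in groups.items() if len(ids) > 1}
print(duplicates)  # {('2024-11-01', '2024-11-20', 512.3): [1, 2]}
```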
UBM Errors Check-in
## Summary ### Error Code Validation and Configuration The discussion centered on configuring specific error code validations for certain clients. A request was made to re-enable specific errors, identified as part of the "2.5 series," for a select group of clients rather than universally. It was confirmed that this targeted re-enablement is technically feasible and would be actioned upon receiving a formal email request, leading to the creation of a development story. Furthermore, the logic and current status of error `2045` were clarified: this error is triggered when a bill's past due amount is negative (indicating a credit) and is currently enabled but set to auto-resolve, meaning it does not trigger active alerts. A related error, `2024`, was noted to handle scenarios with a positive past due amount. ### Transitioning from Product Tier to Service Level Filters A significant usability improvement was proposed to replace the existing "product tier" filter with a "service level" filter within the system's search functionality. The current product tier filter was deemed largely irrelevant for operational tasks, particularly when teams need to distinguish between critical billing types like "prepay" and "postpay." It was confirmed that the necessary service level data is already available from HubSpot and stored locally, making this transition viable. The change aims to streamline workflows, especially for teams manually analyzing customer data in Excel, and will be formally confirmed with stakeholders before implementation. ### Power BI Reporting and Data Definition Challenges A substantial portion of the meeting was dedicated to addressing inconsistencies in Power BI reports used for tracking the bill processing lifecycle. The core issue identified was a misalignment in how key data columns are defined and interpreted by different users, leading to conflicting reports and conclusions. The discussion aimed to establish a single source of truth for metrics like "date bill received," "date loaded," and "date processed." **Key Data Definitions Clarified:** - **Received to Loaded:** This metric represents the time between when a bill is marked as available for download (from DSS/DDS) and when it is successfully loaded into the UBM platform. - **Loaded to Processed:** This covers the time a bill spends within UBM, from being loaded until it is processed and marked as ready for payment. - **Processed to Pay File:** This is the final step, capturing the time from when a bill is customer-ready to when the AP process generates the payment file. **Identified Data Gaps and Next Steps:** A critical data gap was acknowledged: the Power BI dashboard only reflects data *after* a bill is pushed to UBM. Information about bills that are still in the download queue, being processed in DSS, or stuck in the legacy DDS system is completely absent from these reports. This creates a blind spot in the complete bill lifecycle. The agreed-upon next step is to obtain specific examples where reports from different users do not match, enabling a step-by-step analysis to pinpoint the root cause, whether it's differing definitions, incorrect filters, or a true data discrepancy. ### Bridging Data Silos for Comprehensive Reporting The conversation highlighted the overarching challenge of creating a unified view of the bill lifecycle by bridging data from two separate systems: UBM and DSS/DDS.
The current reporting structure fails to provide a holistic picture, as it lacks data on the initial stages of bill processing that occur before data is sent to UBM. The historical focus has been on optimizing the "loaded to processed" phase within UBM due to its manual components. The new goal is to develop a comprehensive report that tracks a bill from the moment it becomes available for download, through its entire journey in DSS/DDS, and finally through the UBM payment pipeline, which would require at least five distinct data points for complete tracking.
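A minimal sketch of the three clarified metric definitions, computed from hypothetical timestamp columns. The column names are assumptions, but the point stands that each stage is a simple difference between adjacent timestamps, so mismatched reports should reduce to differing filters or column definitions:

```python
import pandas as pd

bills = pd.DataFrame({
    "bill_id": ["B1", "B2"],
    "available_at": pd.to_datetime(["2024-10-01", "2024-10-05"]),  # marked downloadable in DSS/DDS
    "loaded_at": pd.to_datetime(["2024-10-13", "2024-10-07"]),     # loaded into UBM
    "processed_at": pd.to_datetime(["2024-10-15", "2024-10-09"]),  # customer-ready
    "pay_file_at": pd.to_datetime(["2024-10-16", "2024-10-10"]),   # AP payment file generated
})

# Each agreed metric is the gap between two adjacent lifecycle timestamps.
bills["received_to_loaded"] = bills["loaded_at"] - bills["available_at"]
bills["loaded_to_processed"] = bills["processed_at"] - bills["loaded_at"]
bills["processed_to_pay_file"] = bills["pay_file_at"] - bills["processed_at"]
print(bills[["bill_id", "received_to_loaded", "loaded_to_processed", "processed_to_pay_file"]])
```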
UBM Reporting - Automate for Internal MGT & Client Facing
## Customer The customer is a user of a utility bill management (UBM) and bill pay platform. Their role involves overseeing and managing the processing and payment of a large volume of utility bills for their organization. They have a background of using multiple UBM platforms from a client perspective, giving them a unique and practical understanding of what a user expects from such a service. They are deeply involved in the operational challenges and are focused on demonstrating the value of the service to their own internal stakeholders and clients. ## Success The most significant success with the product is its core function of processing a high volume and value of bills. For instance, the platform successfully processed $10 million for one client. This demonstrates the system's capacity to handle substantial financial operations and its essential role in the customer's accounts payable workflow. The ability to manage such scale is the foundational value proposition that justifies the service's use. ## Challenge The single biggest challenge is the **lack of reliable, accessible, and trustworthy data for reporting**. Manually compiling basic performance metrics, such as the number of bills processed, their total dollar value, and on-time payment rates, can take several hours per client. The data is inconsistent across different sources (e.g., the native UBM platform versus the payment vendor's system), leading to discrepancies and a lack of a single source of truth. This forces the team onto the defensive with clients, as they cannot easily counterbalance occasional late fees or disconnection notices with an overarching, positive narrative of performance and progress. ## Goals The customer's primary goals are centered on gaining data transparency and using it to improve client relations and internal operations. - To have immediate, push-button access to three fundamental metrics for any client: the number of invoices processed, the total dollar value of those invoices, and the on-time payment performance. - To "flip the narrative" from being defensive about individual errors to proactively promoting the overall good work being done. - To establish a clean, trusted dataset that serves as a single source of truth for both internal business management and client-facing communication. - To eventually develop automated, client-facing reports or an executive dashboard within the platform that displays these key performance indicators. - To track and demonstrate progress on specific operational projects, such as onboarding, by showing weekly progress metrics like the percentage of accounts loaded into the system.
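A minimal sketch of the three push-button metrics, computed from a hypothetical payments extract. The field names and the on-time rule (paid on or before the due date) are assumptions:

```python
import datetime as dt

# Hypothetical per-client payments extract.
payments = [
    {"client": "Acme", "amount": 1200.00, "due": dt.date(2024, 11, 10), "paid": dt.date(2024, 11, 8)},
    {"client": "Acme", "amount": 800.00, "due": dt.date(2024, 11, 12), "paid": dt.date(2024, 11, 15)},
]

client = "Acme"
rows = [p for p in payments if p["client"] == client]

invoice_count = len(rows)                                        # metric 1: invoices processed
total_value = sum(p["amount"] for p in rows)                     # metric 2: total dollar value
on_time_rate = sum(p["paid"] <= p["due"] for p in rows) / invoice_count  # metric 3: on-time rate

print(f"{client}: {invoice_count} invoices, ${total_value:,.2f}, {on_time_rate:.0%} on time")
```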
Victra Stabilization # 2
## Summary ### PG&E Access and LOA Confusion A significant issue was identified regarding Pacific Gas and Electric (PG&E) account access. An unexpected Letter of Authorization (LOA) request was received, creating confusion because internal systems indicated that portal access with Multi-Factor Authentication (MFA) was already successfully established. The core problem appears to be that while the team can pay bills through the portal, the utility is refusing to conduct payment research over the phone without a formal LOA on file. This situation is considered a high-priority item as it is anticipated to be a major point of contention with the client, Victor. An immediate action was taken to pause any LOA submission pending an internal investigation to confirm the exact nature of the existing access and to avoid escalating the situation prematurely. ### Invoice Reconciliation Status The primary focus of the meeting was a large-scale reconciliation effort for client invoices. A detailed report was pulled, identifying approximately 420 accounts that were missing invoices with an October invoice date. Within this larger group, a critical subset of about 55 accounts was highlighted; these are accounts for which no invoices have been received for September, October, or November, making them the highest priority. The team is actively working to determine the status of these 55 accounts, investigating possibilities such as them being closed accounts, having quarterly billing cycles, or being affected by system linking errors. The immediate goal is to resolve the status of these 55 accounts by the end of the day to prevent any potential service disruptions. ### Duplicate Bill Management A major technical challenge impacting the reconciliation process is the prevalence of duplicate bills within the system. It was reported that for many accounts, the system has downloaded five to six duplicate copies of the same bill. This significantly complicates the invoice tracking and payment process. The development team is actively working on a fix for the "download helper" tool, which is expected to prevent these duplicates from being created in the future. In the meantime, the operations team is manually sifting through spreadsheets to identify and correct these output errors to ensure data accuracy. ### Payment Processing and System Linking The payment process itself is under close scrutiny. A specific procedure is in place for payments that have been in a "processing" status for over 20 days; these are being canceled and reissued, with a preference for electronic payments over physical checks. Furthermore, instances of duplicate payments were discovered, where a mock bill was paid and then the actual bill was also processed. These are being identified for cancellation and refund. A parallel effort is underway to address system linking issues, where bills are not properly associated with the correct client account numbers, leading to them appearing as missing. A dedicated resource is working through a list to manually link these accounts and resolve the discrepancies. ### Strategic Reporting and Client Communication There was an extensive discussion about establishing a single, reliable source of truth for key performance metrics to be communicated to the client, Victor. The goal is to confidently report on three key data points: the total number of accounts (aiming for ~2,815), the number of accounts with payments processed or in funding, and the number of accounts remaining.
Currently, there is a lack of confidence in the numbers being pulled from various systems (UBM, Power BI, PayClearly), with slight discrepancies causing concern. The decision was made to postpone sharing new metrics until next Tuesday to allow time for the duplicate cleanup and system linking to be completed, ensuring that the reported numbers are accurate and defensible. The ultimate objective is to shift the narrative with the client to confirm that **100% of their bills are being managed and paid**, moving away from a perception of last-minute scrambling. ### Long-Term Vendor Onboarding Issues A separate, ongoing issue involves the onboarding of accounts with San Diego Gas & Electric (SDG&E). This process has been stalled for approximately three months, with the vendor providing the same response daily, attributing the delay to high volume on their end. Despite daily follow-ups and providing all necessary documentation including LOAs and account lists, no progress has been made. The team is considering escalating the issue by having multiple people from the team contact the utility to apply more pressure and determine if there are other individuals or departments within the utility that can be engaged to break the logjam.
Faisal <> Sunny
## Summary The meeting focused on diagnosing and addressing critical bottlenecks in the invoice processing workflow, with a particular emphasis on download delays. The core issue identified is a significant lag, averaging 12 days in October, between when an invoice becomes available and when it is downloaded. This "receive to loaded" delay is now understood to be the primary constraint impacting Service Level Agreements (SLAs), far outweighing processing times within downstream systems like DSS and UBM. A major concern discussed is that the current operational capacity appears to be at its limit. The team is processing a high volume of invoices daily, but this is insufficient to handle monthly spikes and the existing workload. It was emphasized that onboarding any new customers under the current process would exacerbate the problem, leading to a further degradation of SLAs. The conversation explored whether the root cause is systemic, such as uneven daily invoice volumes that create temporary bottlenecks, or related to process and training, where personnel might not be retrieving all available invoices. Automation, specifically through the Arcadia solution, was highlighted as the necessary long-term strategic fix to outsource and streamline the download process. In the short term, the focus will be on gaining better visibility into the problem through data analysis and enhanced reporting to understand invoice date distributions and precisely define capacity limits. ## Wins - **Identification of the Primary Bottleneck:** The main issue causing SLA failures has been successfully pinpointed to the "receive to loaded" timeline in the download process. - **Effective Downstream Processing:** Performance in systems like DSS and UBM is considered relatively strong and is not the primary source of the current delays. - **Progress on Reporting:** Development of a Power BI report is underway to provide better visibility into processing times and will serve as a foundational tool for ongoing monitoring and troubleshooting. ## Issues - **Critical Download Delays:** The average time to download an invoice after it becomes available is unacceptably high, peaking at 12 days in October, which is the main driver of missed SLAs. - **Capacity Limit Reached:** The current team is operating at its maximum capacity for manual downloads. The existing volume, coupled with uneven daily distributions, means the team cannot keep up with demand, especially during peak periods. - **Threat from New Business:** Onboarding new customers is not feasible without first resolving the capacity constraint, as it would immediately worsen the situation. - **Process and Data Gaps:** There is a lack of clarity on whether the download issue is purely a capacity problem or also involves process gaps, such as personnel not downloading all available invoices. Data on invoice date distributions to identify daily spikes is also currently unavailable. - **System Duplication:** Issues with FDG Connect causing some invoices to be processed multiple times are consuming valuable bandwidth, though this is a secondary concern to the core download delay. ## Commitments - **Data Analysis on Invoice Distribution:** An analysis will be conducted to graph invoice dates for specific customers (e.g., Victra) to identify if daily volume spikes are a contributing factor to the delays.
- **Enhanced Reporting:** Development will continue on the consolidated Power BI report, with a commitment to add features like filtering by customer type (e.g., "Bill Pay") to provide more targeted insights. - **Investigation into System Issues:** A deeper dive will be undertaken with the relevant teams to investigate and resolve the duplication issues occurring in FDG Connect. - **Contract and Role Transition:** The process for converting the contract into a full-time position has been initiated with HR, and both parties will track progression toward this goal.
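A minimal sketch of the committed invoice-date distribution analysis: bucket invoices by date to expose the daily spikes suspected of overwhelming manual download capacity. The extract format is an assumption:

```python
import pandas as pd

# Hypothetical extract for one customer (e.g., Victra); replace with the real export.
invoices = pd.DataFrame({
    "invoice_date": pd.to_datetime(
        ["2024-10-01", "2024-10-01", "2024-10-01", "2024-10-02", "2024-10-15"]
    )
})

# Counts per day; spikes show where manual download capacity is exceeded.
daily = invoices.groupby(invoices["invoice_date"].dt.date).size()
print(daily)
print(daily.idxmax(), daily.max())  # the single heaviest day and its volume
```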
LAB Report
## Summary ### Invoice Lifecycle Report Analysis The primary focus was on evaluating the newly requested invoice lifecycle report and its relationship to the existing "lab" report. The discussion centered on whether the new report provided unique value or if its functionality could be merged into the established system. It was determined that the lifecycle report was strikingly similar to the existing lab report, leading to a strategic decision to avoid creating redundant, stopgap solutions. The consensus was to enhance the lab report with any necessary additional columns from the lifecycle report, thereby creating a single, authoritative source of data for tracking invoice statuses, particularly for high-priority clients like Victra. ### Escalating Duplicate Invoice Issue A critical and worsening problem with duplicate invoice downloads was identified. The team has observed a dramatic increase, from roughly 50 duplicates per month historically to *hundreds of duplicates per day*. This surge is placing a significant manual burden on the team, requiring them to review and delete thousands of entries individually. While the system's duplicate check is functionally marking items correctly, the sheer volume indicates a fundamental breakdown in the process that should prevent these duplicates from being downloaded in the first place. The example of the "Blue Ridge Mountain EMC" account was highlighted, where the same invoice was downloaded over ten times across multiple days, demonstrating the severity of the issue. ### Root Cause of Duplicate Downloads The investigation into the duplicate problem pinpointed a likely issue with the "snooze" function within the FDG Connect / Web Download Helper system. The intended behavior is for a processed invoice to disappear from the download queue until its next billing cycle. However, evidence suggests that tasks are repopulating in the download helper *after* they have already been processed and completed. For instance, an invoice processed on September 14th was subsequently downloaded again on September 26th, 30th, and multiple times in October, which should not be possible. This indicates a potential bug where the system is merely snoozing tasks for two days without respecting the invoice's processed status or its full billing cycle duration. ### Account Validation and Processing Errors A separate but related issue involves invoices being processed under incorrect account numbers. The system's current validation is insufficient to catch when a user manually uploads a bill to the wrong account in the Web Download Helper. In one case, an invoice for account 6300 was uploaded and processed under account 6301. The lack of a mandatory, human review step for every bill means these errors can flow through the system unchecked. This problem is compounded by past issues with the Web Download Helper interface, such as account lists reordering, which increases the likelihood of user error during manual uploads. ### Data Reporting and System Integration The conversation also covered challenges with data consistency across different reporting systems. There was confusion regarding a specific Power BI report, as the origin and data definitions for columns like "Invoiced/Received" and "Received/Loaded" were unclear. The team expressed skepticism about how an external system could accurately track when a physical bill was received via mail versus downloaded.
This highlights a need for clearer data lineage and definitions between the Data Services System (DSS), the legacy system, and any external reporting tools to ensure all stakeholders are working from the same understood metrics.
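A minimal sketch of the *intended* snooze behavior as described in the root-cause discussion: a processed invoice should stay out of the queue until its next billing cycle rather than reappearing after a fixed two-day snooze. The field names and 30-day cycle length are illustrative:

```python
import datetime as dt

def next_eligible_date(processed_on: dt.date, billing_cycle_days: int = 30) -> dt.date:
    """A processed task should not repopulate until the next billing cycle."""
    return processed_on + dt.timedelta(days=billing_cycle_days)

def should_appear_in_queue(today: dt.date, processed_on: dt.date | None) -> bool:
    if processed_on is None:
        return True  # never processed: eligible for download
    return today >= next_eligible_date(processed_on)

# The observed bug, restated: processed Sept 14, yet downloaded again Sept 26.
assert should_appear_in_queue(dt.date(2024, 9, 26), dt.date(2024, 9, 14)) is False
assert should_appear_in_queue(dt.date(2024, 10, 15), dt.date(2024, 9, 14)) is True
```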
Victra Stabilization # 1
## Summary The meeting focused on the critical and urgent need to stabilize the billing and payment processes for a major client, Victor, following persistent issues that have led to service disconnection notices and significant financial damages. The situation is described as "shameful" and requires immediate, all-hands-on-deck resolution by the end of the week, especially with the crucial holiday retail season approaching. The core of the problem is a multi-faceted breakdown in the invoice-to-payment lifecycle, involving delayed downloads, systemic duplicates, and processing bottlenecks. The team is tasked with conducting a deep-dive analysis to identify every point of failure and implement solutions to ensure all ~2,800+ monthly accounts are paid on time to prevent catastrophic service interruptions and further financial penalties. ### The Urgent Business Crisis The client situation has escalated to a critical level, demanding immediate and comprehensive action from the entire team. The stability of the Victor account is paramount, with significant financial and contractual repercussions already in motion. - **Severe Business Impact**: The client is preparing to charge $65,000 in damages for issues to date, with the potential for more charges or even contract cancellation. This problem is noted to have a potential ripple effect across the entire organization. - **Imminent Operational Risk**: The upcoming Black Friday and Christmas holiday season represents the client's peak revenue period. Any service disconnections during this time would be catastrophic, as their store employees work on commission and lose income when systems are down. - **Clear Deadline**: A firm deadline has been set to solve these issues by the end of the week. The team must achieve a state where they are comfortable that all payments for November will be made on time to avoid further disconnection notices. ### Establishing the Account Baseline A primary step was to confirm the exact number of active accounts to establish a clear target for monthly payments and identify discrepancies in the system. - **Active Account Count**: After reconciling data, the team established there are **2,814 active accounts** that require monthly payment. This figure was derived from an initial 3,138 accounts, minus 184 closed accounts and 141 accounts that are open but flagged for closure or linking to another account. - **Credentials and Invoice Sources**: Of the 2,814 accounts, 2,420 have credentials set up for portal-based invoice downloads. The remaining ~300+ accounts receive paper or emailed invoices, which are manually processed. A specific challenge was noted with 51 accounts from utilities SDG&E and SoCalGas that still lack portal access. ### Diagnosing the Invoice Processing Bottlenecks The central focus of the discussion was a detailed forensic analysis of the invoice lifecycle, using a custom report, to pinpoint where delays and failures are occurring. - **The Critical Delay**: The most significant bottleneck identified is the time between an invoice becoming available and it being downloaded into the system. This initial step is consuming a large portion of the total processing time. - **The Mock Bill Dilemma**: A specific issue was uncovered where "mock bills" created on September 30th are disrupting the automated download schedule. 
In one examined case, an invoice with an October 9th date was not downloaded until November 2nd, and the subsequent November invoice had not been downloaded by the meeting date (November 19th), risking a late payment. - **Systemic Duplicate Problem**: The data is plagued by a high volume of duplicate bill downloads, estimated at around 3,000 entries. This creates massive inefficiency, as team members waste time processing the same invoice multiple times, and it obscures the true state of what has been processed. ### Technical and Systemic Hurdles Beyond the initial download phase, several technical issues within the Data Services (DS) and UBM systems are creating further delays and operational headaches. - **Data Services (DS) Processing**: While improving, the DS system still takes an average of 3 days to process invoices after they are loaded. A major contributor to this is the "broken batch" issue, where a single invalid bill in a batch causes the entire batch (sometimes 100+ good bills) to be deleted and reprocessed. - **UBM Mapping and Validation**: Invoices are getting stuck in IC&DV (Invoice Coding and Data Validation) and DVO2 stages, often due to missing location mappings or other attributes. While some automatic mapping is in place, a number of accounts still require manual intervention and information from the client. - **Payment Method Inefficiency**: The continued use of paper checks was highlighted as a risk factor due to longer mailing times. A push was made to urgently increase the number of payments made via ACH and credit card to reduce the payment execution timeline. ### Path Forward and Action Plan The meeting concluded with a clear directive to make the diagnostic report actionable and to prioritize fixes that will have the most immediate impact on stabilizing the process. - **Making Data Actionable**: The team will work overnight to cleanse the report of duplicates and use it to create a definitive list of accounts requiring immediate action for November invoices. This includes downloading missing bills and ensuring all 2,814 target accounts are accounted for. - **Targeted Technical Fixes**: Key technical improvements were prioritized: fixing the duplicate download logic, changing the system to snooze bills for only one day (instead of two) to speed up reprocessing, and modifying the "broken batch" logic to fail individual bills instead of entire batches. - **Daily Tracking and Accountability**: The team must synchronize around the goal of processing and paying for all 2,814 accounts each month. A clear, daily countdown or tracking mechanism is needed to provide visibility and ensure the target is met.
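A minimal sketch of the prioritized "broken batch" change: process bills individually so one invalid bill fails alone instead of invalidating an entire batch. `parse_bill` and the bill structure are hypothetical stand-ins:

```python
def parse_bill(bill: dict) -> dict:
    """Hypothetical parser; raises ValueError for an invalid bill."""
    if bill.get("amount_due") is None:
        raise ValueError("missing amount due")
    return bill

def process_batch(batch: list[dict]) -> tuple[list[dict], list[dict]]:
    """Fail bills one at a time rather than deleting and reprocessing the whole batch."""
    succeeded, failed = [], []
    for bill in batch:
        try:
            succeeded.append(parse_bill(bill))
        except ValueError:
            failed.append(bill)  # only this bill is retried, not 100+ good ones
    return succeeded, failed

ok, bad = process_batch([{"amount_due": 10.0}, {"amount_due": None}])
assert len(ok) == 1 and len(bad) == 1
```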
UBM-DS Integration
## Data Integration Between UBM and Data Services The meeting focused heavily on establishing data connectivity between UBM and Data Services to enable cross-platform reporting capabilities. The immediate priority involves creating a mechanism to pull data from UBM into Data Services' Power BI environment, with future plans for bidirectional data flow. - **Initial Data Flow Strategy**: The team decided to start by pulling UBM data into the existing Data Services Power BI infrastructure rather than creating new accounts for UBM users. This approach allows leveraging current access permissions and avoids the complexity of account management across organizations. - **Technical Implementation Details**: Discussion centered around identifying the UBM database location and establishing secure connections, potentially requiring IP whitelisting or other authentication methods. The UBM data resides in a read replica rather than the production database, which may simplify access. - **Report Structure Planning**: A new dedicated workspace will be created within Power BI specifically for UBM-related reports, maintaining logical separation from existing Data Services reports while enabling future combined reporting capabilities. ## Infrastructure Optimization and Scaling The team reviewed current system performance and identified opportunities to optimize resource allocation based on recent stability improvements. - **VM Scaling Reduction**: With recent performance issues resolved, the team approved scaling down virtual machine resources to more appropriate levels. This optimization will proceed gradually with monitoring to ensure system stability. - **Client Version Management**: An outdated client version was identified, though it's not currently causing operational issues. The team noted the need for a streamlined update process for non-technical users. ## Workflow and Process Improvements Several operational workflows required attention, particularly around duplicate resolution and batch processing failures. - **Duplicate Resolution Enhancement**: The duplicate resolution application will be updated to include checksum validation when identifying duplicate bills. This addresses an issue where bills sent to duplicate resolution weren't being properly matched due to missing metadata fields. - **Batch Processing Optimization**: To minimize the impact of processing failures, the team discussed breaking batches into individual bill processing rather than handling complete batches. This change would isolate failures to single bills rather than affecting entire batches. - **Error Logging Standardization**: Multiple validation checks have created fragmented error logging, requiring consolidation into a unified logging approach for better monitoring and troubleshooting. ## Future Planning and Automation The discussion included longer-term considerations for automating data exchanges and preparing for expanded service offerings. - **Bidirectional Data Exchange**: Future requirements include automated data flows from Data Services to UBM, potentially using the same infrastructure being established for the initial UBM-to-Data-Services connection. - **Expanded Service Offering Preparation**: The logging and monitoring improvements will support plans to offer Data Services to additional Constellation Navigator customers in the future, requiring robust, scalable infrastructure. 
- **Ticket Management Process**: The team emphasized using JIRA for tracking all work items, with integration to Teams messages to automatically create tickets with proper context and references.
Medline Funding Analysis
## Summary The meeting centered on resolving payment processing and funding discrepancies with a business partner, specifically analyzing why certain invoices incurred late fees and identifying systemic issues in the funding reconciliation process. ### Funding Timeline and Discrepancies A detailed examination of a specific payment batch revealed confusion regarding funding dates and the root cause of delays. The discussion clarified that while funds were received within a reasonable timeframe, a misalignment in how dates are recorded by the partner's team is creating perceived delays. - **Investigation of a specific payment:** A payment of $1.66 million was received on November 19th, which was deemed timely. The partner's system, however, showed a "doc date" of November 15th, which was impossible as the payment file was not sent until November 16th. This indicates an internal logging issue on the partner's side that misrepresents the timeline. - **Identifying the core issue:** The primary problem is not necessarily the speed of funding but the accuracy of the amounts funded and the misalignment of internal dates, which complicates the analysis of what caused specific late payments. ### Payment File Reconciliation Process The core of the operational challenge is reconciling the amounts sent in daily payment files with the amounts actually funded by the partner. - **Ideal vs. Actual Process:** The ideal process involves sending a daily "Medline payment file" with a total amount due, which the partner should fund exactly. In practice, the partner often combines multiple payment files into a single transaction and sometimes funds an amount that does not match the sum of the files. - **Manual analysis is required:** A significant amount of manual effort is needed to match the funded amounts received against the individual payment files sent. This process is complicated by instances of underfunding and the aggregation of files, making it difficult to pinpoint which specific invoices were delayed due to a lack of funds. ### Impact of Underfunding and Aggregation The meeting detailed how the partner's practice of aggregating payments and occasional underfunding directly impacts the payment of invoices. - **Consequences of underfunding:** When a payment file is underfunded, it creates a funding gap. PayClearly, the payment processor, must then manually decide which invoices within that batch can be paid with the available funds, leading to delays for the remaining invoices. These delays can have a cascading effect, as subsequent files may also be impacted. - **Challenges in attribution:** It is difficult to directly link a specific late fee to a specific instance of underfunding that may have occurred weeks prior, as the funding is often "caught up" in a later, lump-sum payment. ### Defining the Problem and Strategic Approach The conversation shifted to formally defining the problems and deciding on a strategic response to the partner. - **Clarifying the problem statement:** The fundamental issue is confirming that the partner funds each payment file with the exact amount requested. The secondary issue is the partner's practice of combining multiple files, which, while not a problem in itself if funded correctly, complicates reconciliation.
- **Strategic communication:** A plan was discussed to reply to the partner by highlighting two key points: the operational difficulty caused by combining multiple payment files into single transactions, and the confirmed instances where files were underfunded, leading to payment delays. ### Resolution and Path Forward A decision was made on how to proceed, balancing the need for a resolution with the reality of limited resources. - **Forgoing a deep-dive analysis:** It was agreed that conducting an exhaustive, line-by-line analysis of all historical payment files to dispute late fees is not a prudent use of time given current resource constraints. The effort required would be disproportionate to the amount of money in dispute. - **Proposed solution:** The path forward involves sending a communication to the partner that firmly reiterates the need for them to fund payment files on a one-to-one basis. The goal is to use clear examples of funding inaccuracies to convince them to change their process, thereby preventing future issues, rather than spending excessive time litigating past late fees.
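Because the partner combines several payment files into one funding transaction, reconciliation amounts to finding which subset of outstanding file totals sums to a funded amount. A minimal brute-force sketch, workable for a handful of open files; the file names, amounts, and tolerance are illustrative:

```python
from itertools import combinations

def match_funding(funded: float, open_files: dict[str, float], tol: float = 0.01):
    """Find a subset of outstanding payment-file totals summing to the funded amount."""
    for r in range(1, len(open_files) + 1):
        for combo in combinations(open_files.items(), r):
            if abs(sum(amount for _, amount in combo) - funded) <= tol:
                return [name for name, _ in combo]
    return None  # no exact match: likely an underfunded or partial payment

open_files = {"file_1114": 900_000.00, "file_1115": 760_000.00, "file_1116": 420_000.00}
print(match_funding(1_660_000.00, open_files))  # ['file_1114', 'file_1115']
print(match_funding(1_500_000.00, open_files))  # None -> flag for manual review
```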
Review Mapping Location Logic
## Summary ### Virtual Location Mapping Progress Significant progress was made on mapping previously unmapped virtual locations for billing data. Out of an initial batch of approximately 700 locations, around 600 were successfully mapped and approved. The remaining ~100 present a greater challenge as they are tougher to verify, primarily because they are not matching against the expected "last string" data, which is a crucial identifier for Banyan customers. The team is actively working on improving the matching logic to increase the automation rate and reduce the manual review burden. - **Processing a new, updated dataset:** A new data export was initiated to capture the most current state of live bills, which now number around 1,800, with 2,500 bills in total across the system. The updated mapping logic will be run against this new dataset for a fresh round of review and approval. - **Focus on streamlining approvals:** A key objective is to refine the process so that approvals for mapped locations are as effortless as possible for the team, minimizing the time and effort required for verification. - **Future development for comprehensive mapping:** A more advanced mapping logic that considers all six data points constituting a virtual account is planned. However, this enhancement is estimated to be at least a week away due to its complexity, meaning the current process will continue on a daily basis for the immediate future. ### Billing Data Verification and Processing The team is managing a substantial volume of bills in the data verification stage, with a focus on clearing errors and processing bills for payment. The daily throughput for the data verification team is typically between 100 and 150 bills per person. A specific query-based script was previously used to process a large batch of bills, successfully clearing out 1,700 bills from a particular environment. The team acknowledged that while this method is effective, it carries an inherent risk of missing certain edge cases where "mock" bills and "actual" bills might be out of sequence, potentially leading to double payments if not carefully managed. - **Addressing specific customer billing issues:** For the customer "Ascension," specific scripts are being run to handle bills dated before and after November 1st. This involves special processing to flip project codes, remove capacity balances for current charges, and re-parse the bills to resolve errors important to the customer. - **Clarification on bill types:** The discussion clarified the distinction between "history" bills and "special" bills. History bills are for periods prior to a customer's official go-live date and are loaded as a separate charge. Special bills are typically used for Accounts Payable (AP) or Bill Pay customers, often to delay or control payment. - **External reporting and expectations:** There is an established routine for reporting numbers to an external stakeholder, with updates typically sent around 8:00-8:30 AM and again in the evening. This reporting explicitly excludes mock bills to provide an accurate count of unprocessed, actual bills. ### System Enhancement Requests A prominent feature request was discussed to improve the usability and accuracy of internal tools. The current "product" field displayed in the system is not meaningful for operational prioritization, as customer tiers like "Gold" or "Platinum" do not consistently correlate with their service type (e.g., Bill Pay, AP).
### System Enhancement Requests
A prominent feature request was discussed to improve the usability and accuracy of internal tools. The current "product" field displayed in the system is not meaningful for operational prioritization, as customer tiers like "Gold" or "Platinum" do not consistently correlate with their service type (e.g., Bill Pay, AP). The proposal is to replace this field with the "service level" across the system, as this is the primary field used for prioritization and reporting.
- **Need for better filtering and reporting:** The absence of a service-level filter makes it difficult to quickly answer critical questions, such as how many of the 3,000 live bills are for Bill Pay customers. This currently requires manually maintained and error-prone lists of customers, which becomes unsustainable as the customer base grows.
- **Impact on operational efficiency:** The lack of this filter has led to reporting inaccuracies in the past, as team members can inadvertently miss customers when compiling numbers manually, highlighting the need for a more reliable, system-driven solution.

### Data Integrity and Other Investigations
Several other data-related issues are under investigation to ensure the accuracy and reliability of the billing information. A known issue of double-counted usage is being looked into, with initial observations suggesting the problem may be isolated to the "distribution" component of bills and not the "supply" part. The team has received a detailed list of affected bill IDs to aid in the investigation.
- **Clearing specific data errors:** There is an ongoing effort to address bills with specific errors related to "unit of measure," such as power factor and load factor. While the volume of these errors is low, clearing them is considered a priority to improve overall data quality.
- **Testing enrollment bill processing:** A question was raised about the readiness of the system to process "enrollment bills" through the DSS pipeline. It was determined that this specific workflow has not been formally tested, indicating it is not a near-term priority and its implementation complexity is not yet fully understood.

### Customer-Specific Workloads and Communications
The team is preparing for an increased workload related to the customer "Victor," as a significant number of their bills with November due dates have been processed. This is expected to trigger inquiries from the customer, and the team is proactively aligning on how to respond. Furthermore, there is external pressure from the sales team regarding the development of a new "lab" or reporting dashboard ("LED") that accurately reflects processed "actuals."
- **Managing external expectations:** There is a clear gap between the sales team's promises or expectations to customers and the current operational reality and development priorities. The development team's resources are not currently focused on building this requested dashboard, as they are occupied with more critical data processing tasks.
- **Internal demo and testing support:** The team is also handling requests to load a limited number (e.g., 10) of accounts or bills for a broker named "Evolution" for demonstration purposes, balancing this ad-hoc support with their core responsibilities.
Power BI -> Fabric Migration
## Summary

### Invoice Processing and Migration
Significant progress was made on processing and migrating invoice data, with a major automation script completed. A script for processing non-EDI invoices was successfully run for approximately 5,000 records, representing the bulk of the pending items. This script was enhanced with a checkpoint feature to remember its last execution point, ensuring efficiency and preventing duplicate work (a sketch of the pattern appears at the end of this summary). The migration of the old SSRS lab report to Power BI is also nearing completion, with the final step being the replication of the exact view for querying in the new environment. The old report is already accessible within the Power BI report system.

### Power BI Report Development and Troubleshooting
Active development and troubleshooting were conducted on several Power BI reports, with a focus on recreating existing system functionality and resolving access issues. The "System Status New" Power BI report was successfully recreated to show bill counts by workflow step. A persistent error when opening the report in Power BI services was identified and resolved during the meeting; the issue was related to the specific file version that had been published. The solution involved running and republishing the correct file from Power BI Report Builder, after which the report loaded successfully.

### Data Archiving Strategy
A strategy for archiving historical data was discussed to manage the large volume of records in the database. With approximately 6 to 7 million records marked with a "complete" status, some dating back to 2017, a decision was made on the scope of the archiving process. The plan is to archive all records from prior to the current year, while retaining the current year's data for troubleshooting and review of any failed system processes. The implementation of the archiving logic is pending final confirmation before the automated script is executed.

### System Status Report Enhancements
The newly created "System Status New" report was reviewed, and a list of future enhancements was outlined to improve its clarity and usefulness. A key observation was made regarding the report's calculation of totals: the "Bills" column only shows pending bills and excludes those in a "ready to send" status, whereas the "Total" column includes all bills. Planned improvements include:
- **Reordering and renaming columns:** To make the data presentation more intuitive and logical.
- **Filtering out unassigned records:** Ensuring the view only includes builds associated with a specific client to maintain data relevance.
- **Adding column descriptions or tooltips:** To provide clear definitions for each workflow step, aiding user comprehension.
- **Enabling detailed drill-downs:** Investigating the possibility of expanding a customer row to view individual builds or batches for more granular analysis.

### Future Integrations and Access
Preparations are underway for future system integrations and to resolve pending access requirements for the team. There is an intent to add new columns to the system status report for other business units like UBM, which are not currently included. A follow-up is required on a pending software license, as the vendor has not responded to recent emails. Securing this license is crucial for providing the team with the necessary access to the FDG connect system to proceed with upcoming work.
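As referenced in the invoice-processing section above, a checkpoint lets a long-running script resume where it left off. Here is a minimal sketch of the pattern, assuming a JSON checkpoint file and records arriving in ascending ID order; both are assumptions, not the script's actual design.

```python
import json
from pathlib import Path

CHECKPOINT = Path("invoice_checkpoint.json")  # hypothetical location

def load_checkpoint() -> int:
    """Return the last processed record ID, or 0 on a first run."""
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())["last_id"]
    return 0

def save_checkpoint(last_id: int) -> None:
    CHECKPOINT.write_text(json.dumps({"last_id": last_id}))

def handle(record: dict) -> None:
    print("processing", record["id"])  # stand-in for the real work

def process_invoices(records: list) -> None:
    last_id = load_checkpoint()
    for rec in records:  # assumes records are sorted by ID
        if rec["id"] <= last_id:
            continue  # already handled on a previous run; skip to avoid duplicates
        handle(rec)
        save_checkpoint(rec["id"])  # persist progress after each record

process_invoices([{"id": i} for i in range(1, 6)])
```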
Simon (Internal) EDI / non EDI Scripts etc
## Account Processing Logistics
The meeting focused on establishing the procedures for handling non-EDI accounts, which involves a multi-step process for managing approximately 9,000 bills. The immediate plan is to begin processing these accounts by first storing the exported data locally. A dedicated SharePoint folder will be provided later for centralized uploads, to which access has already been granted.
- **Batch Processing Strategy**: To ensure system stability and data accuracy, it was recommended to process the accounts in small batches of around 25 at a time (see the sketch at the end of this summary). This cautious approach allows for verification that the correct number of folders is being generated and helps prevent any potential system failures during the bulk operation.
- **Performance Monitoring**: A request was made to monitor how the processing script affects local machine performance, specifically regarding any throttling, to inform future scaling efforts.

## Automated Enrollment Ambiguity
A significant portion of the discussion centered on a major uncertainty regarding the automated enrollment of new accounts. A question was raised about whether the system could automatically handle the setup for a vendor or client upon receiving their first invoice, but it was confirmed that this capability had never been developed or discussed.
- **Functional Gap Identification**: The current system lacks the functionality to automatically create new accounts in the database when a bill from a previously unknown vendor is processed. This represents a substantial gap: an estimated 80% of the work required to build this feature remains.
- **Implication of Manual Work**: This gap means that for any new account encountered within the 9,000 bills, the setup will have to be performed manually, which is anticipated to be a significant and labor-intensive task.

## System Workflow for New Accounts
The conversation explored what currently happens when a bill for a completely new account enters the system, revealing a lack of clarity on the exact workflow. It is believed that a new vendor or client is created in a system called PG Connect, but the specific process within other systems like DDS remains unclear.
- **Need for Further Investigation**: The ambiguity surrounding the enrollment process for new customers was described as a "black box," highlighting a critical area that requires immediate follow-up and clarification with other team members.
- **Potential System Usage**: There is a high degree of confidence (95%) that for secondary setups, a system called AKA Gas is being utilized, though this does not solve the primary issue of initial enrollment.

## Technical Setup and Access
Practical steps were taken to facilitate the work, including the setup of a collaborative digital workspace. The necessary permissions were granted during the meeting to a specific folder, enabling the other participant to view and edit its contents.
- **File Management Protocol**: The established procedure is to run the processing script, store the results locally initially, and then subsequently upload the files to the designated "non EDI setup build" folder in SharePoint once it is fully prepared.
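As referenced in the batch-processing bullet above, here is a minimal sketch of a batched run with a per-batch sanity check. The batch size of 25 comes from the discussion; the assumption that each account yields exactly one output folder, and everything else here, is illustrative.

```python
import time

BATCH_SIZE = 25  # the batch size recommended in the meeting

def run_in_batches(accounts, process, folders_per_account=1):
    """Process accounts 25 at a time, verifying the folder count after each batch."""
    for i in range(0, len(accounts), BATCH_SIZE):
        batch = accounts[i:i + BATCH_SIZE]
        folders_created = sum(process(acct) for acct in batch)
        expected = len(batch) * folders_per_account
        # A systemic failure surfaces after 25 accounts, not after 9,000 bills.
        if folders_created != expected:
            raise RuntimeError(f"batch at {i}: expected {expected} folders, got {folders_created}")
        time.sleep(1)  # brief pause; also a natural spot to log local machine load

# Example: a stand-in processor that "creates" one folder per account.
run_in_batches([f"acct-{n}" for n in range(60)], lambda acct: 1)
```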
Arcadia Plan
## Summary

### Script Execution for Bill Retrieval
The primary method for acquiring bill data involves running a script against a list of non-EDI accounts, though this process faces several immediate challenges. A key issue is the inability to implement a requested file naming convention that includes client details, as this requires substantial development work on the ingestion piece that cannot be prioritized at this time. While the script has been tested on small batches of around 100 accounts with manual verification, the output currently uses random serial numbers instead of descriptive names. Furthermore, a significant portion of the accounts, 563 out of the 7,412 on the initial non-EDI list, have image links that are "not available," which will require clarification from the client.
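A rough illustration of how the accounts with unavailable image links could be split out of the export before the retrieval script runs; the column names and "not available" marker are assumptions about the real spreadsheet, and the data is simulated.

```python
import csv
import io

# Simulated export; column names are assumptions about the real spreadsheet.
export = io.StringIO(
    "account,image_link\n"
    "001,https://example.com/bill-001.pdf\n"
    "002,not available\n"
    "003,\n"
)
rows = list(csv.DictReader(export))

ok, missing = [], []
for row in rows:
    link = (row.get("image_link") or "").strip().lower()
    (missing if link in ("", "not available") else ok).append(row)

print(f"{len(missing)} of {len(rows)} accounts lack a usable image link")
# 'missing' goes back to the client for clarification; only 'ok' feeds the retrieval script.
```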
### Account Volume and Data Complexity
The total scope of the project involves 8,823 meters, which are broken down into distinct processing categories. The non-EDI list, targeted for the initial script run, contains 7,412 accounts, while the EDI list has 2,457 accounts. A major complicating factor is that approximately 1,600 accounts have not generated an invoice since May, and their status as active or closed is unknown, creating a risk of setting up dormant or final-bill accounts in the platform. Reconciling the total counts is also a challenge, as the provided spreadsheet contains multiple account number columns, making it difficult to definitively identify EDI versus non-EDI accounts for accurate processing.

### Vendor Credentials and Onboarding Strategy
A central dilemma is how to onboard the vast number of accounts without possessing the necessary utility website credentials, which the current vendor manages and cannot share due to confidentiality. This makes traditional credential-based onboarding impossible. A potential solution being explored is partnering with a third-party service, Arcadia, which specializes in programmatically retrieving bill data using credentials. However, the contract and security review process with Arcadia is still ongoing, with several stages remaining before they can be formally engaged. There is also a concern that setting up new credentials could disrupt the existing ones managed by the current vendor before the client has officially terminated that relationship.

### Project Timeline and Go-Live Dependencies
The project timeline is under significant pressure, with an optimistic internal go-live target of April, which the client may find unacceptable. The delay is attributed to a late start and the recent receipt of the final account list. A critical dependency for go-live is the development of a payment file in both CSV and a specific text format that the client's system can ingest; without this, loading accounts is irrelevant. The client has a 60-day termination clause with their current vendor, and while there is willingness to request an extension, a firm go-live date cannot be committed to until the onboarding strategy is finalized and the payment file development is underway.

### Data Services and Initial Platform Population
To unblock other workstreams and test the processes, a small batch of bills will be processed and pushed into the platform via Data Services. This initial load of 10-25 accounts, representing a mix of vendors and commodities, will serve multiple purposes: it will provide data for the payment file development, allow for testing the enrollment process, and give the client a tangible demonstration of progress. A decision on whether to scale this Data Services approach for the entire portfolio or rely on Arcadia is needed soon, as it has major implications for resource allocation and timeline.

### Reporting Requirements and Clarification
The client has provided sample reports from their current vendor that need to be replicated. While initial conversations suggested a good mapping to existing platform capabilities, a detailed gap analysis is required. The reports in question include a billing details dump, an accruals report, and an approval file that involves cost apportionment across multiple general ledger accounts. The plan is to internally review these reports, document assumptions, and then schedule a dedicated call with the client to confirm the mappings and identify any missing data points before development begins.

### Arcadia Partnership and Vendor Analysis
To evaluate Arcadia's capability to handle this volume, a proposal was made to share a sanitized vendor breakdown from the account list with them. This would allow Arcadia to assess the portfolio's composition by vendor and commodity and propose a concrete onboarding plan and timeline based on their experience and utility coverage. This engagement is seen as a critical test; if Arcadia can successfully manage this complex portfolio, it would validate their service for future engagements, but if they struggle, it would indicate that the internal process is more robust than anticipated.
LAB Report
## Summary
The meeting focused on reconciling discrepancies between two separate lab reporting systems used for tracking late-arriving bills: the Data Services (DS) lab and the UBM lab. The primary goal discussed was the creation of a new, unified view that combines data from both systems to provide a single source of truth for internal visibility and action.

### The Core Problem: Disconnected Systems
The fundamental issue is that the DS lab and UBM lab operate independently and are not synchronized, leading to conflicting data on bill status. The DS lab provides a live view of what bills have been captured by the download helper tool, while the UBM lab reflects the status of bills that have been fully processed and pushed into the UBM system for payment. This creates a lag and disconnect, where an account might appear as "late" in UBM even though the bill has already been received and is present in DS.

### Data Services (DS) Lab Report
This system offers a real-time, collection-oriented view focused on the physical receipt of bills. Its methodology is based on the actual capture date of bills and uses historical data to project future bill arrivals.
- **Live view of captured bills:** It shows what has been physically downloaded and is currently in the DS system's view.
- **Calculation methodology:** The projected next received date is an estimate, calculated by comparing the last four bills for an account to establish an average cycle.
- **Primary function:** Its main purpose is to guide the collection team in knowing which bills to download and process next.
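As a worked example of the average-cycle estimate described above, here is a minimal sketch, assuming the system simply averages the gaps between the last four received dates; the precise production formula may differ.

```python
from datetime import date, timedelta

def project_next_received(received_dates):
    """Estimate the next bill arrival from the last four received dates."""
    recent = sorted(received_dates)[-4:]
    gaps = [(b - a).days for a, b in zip(recent, recent[1:])]
    avg_cycle = sum(gaps) / len(gaps)  # average days between consecutive bills
    return recent[-1] + timedelta(days=round(avg_cycle))

history = [date(2025, 1, 10), date(2025, 2, 9), date(2025, 3, 12), date(2025, 4, 10)]
print(project_next_received(history))  # gaps of 30/31/29 days -> 2025-05-10
```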
### UBM Lab Report
This system is designed for internal operational visibility and actionability within the UBM platform, focusing on the billing cycle and due dates for payment processing. It consolidates data at the billing account level, which is a combination of vendor and account number.
- **Visibility and action within UBM:** It allows operators to see which accounts are within a certain number of days of their due date so they can take action, such as submitting mock bills.
- **Calculation methodology:** It uses the invoice date and due date from processed bills to determine what is "late" or "missed," and it also allows for manual overrides to correct for anomalies in bill cycles.
- **Primary function:** To enable operational teams to prioritize work and ensure timely payments by identifying gaps in the processed bill data.

### The Proposed Unified Solution
The agreed-upon path forward is to develop a new report, likely in Power BI, that merges data from both the DS and UBM systems. This combined view is intended for internal use only, at least initially, to diagnose processing delays and improve operational efficiency.
- **Combining data sets:** The new report will pull data from both DS and UBM, presenting key dates (like received date and invoice date) side-by-side with clear labels indicating their source system.
- **Internal diagnostic tool:** The main value is in identifying discrepancies. For example, if a bill's received date in DS is recent but the corresponding processed date in UBM is old, it signals a bottleneck in the processing workflow that needs to be actioned.
- **Phased implementation:** The initial goal is to establish a foundational view with the raw, combined data. More advanced features, like the custom calculations and summary pivots currently managed manually in an Excel sheet, will be considered in a second phase.

### Client-Facing Considerations and Risks
The discussion highlighted the complexities and potential risks of exposing the combined data view to clients, leading to the decision to keep it internal for the time being.
- **Current client access:** Some clients currently have admin access to the UBM lab data or receive daily reports derived from it, which already show processing timelines.
- **Risk of misinterpretation:** Exposing the DS receive date could lead clients to incorrectly calculate SLA breaches based on the time a bill sits in DS before being processed in UBM, even though the official SLA likely starts from the UBM loaded date.
- **Long-term strategy:** The ideal scenario is that processing becomes so efficient that clients have no need for a late-arriving bills report. The internal combined view is a tool to help reach that level of performance. The future of granting clients direct admin access to such data was also questioned.
UBM Project Group
## Summary

### Project Charter and Executive Communication
A key outcome from a recent executive meeting is the need to create a formal project charter and a subsequent summary for George. The charter must contain a clear problem statement that outlines the entire chain of processes, identifies what is broken, and connects proposed solutions to tangible business outcomes like avoiding missed payments or late fees. The summary for George should be factual and concise, consisting of a paragraph or a few bullets from each team lead explaining how this project will drive better results for their teams. The approach is to draft the charter first for team review and alignment, then use it as a basis for the executive email.

### Progress on Active Workstreams
Several ongoing initiatives are tracking toward their scheduled completion dates, with significant progress reported across multiple teams.
- **CSM Tool Access:** This initiative is nearly complete and on track for the end of November. The final steps involve determining where to place access links for customers and planning subsequent training for Customer Success Managers (CSMs) to utilize the new tool.
- **Credential Synchronization:** The automated sync of credentials from Data Services (DS) to the UBM system is now operational, running on a weekly basis. Early feedback indicates that the newly available UBM credentials are working correctly for the Lab team. A remaining data integrity issue was identified where DS is not always synchronized with its source, Smartsheet, but this is acknowledged as a larger, separate process issue.
- **Pay KD Files Update:** Development work for this update is scripted and has been prioritized. It is considered a relatively simple task, with an expectation of completion and QA sign-off by the end of December.
- **Invoice Processing:** Good progress has been made, and this workstream is on track for a November completion. The final task involves implementing and fine-tuning monitoring for the new processing system.
- **Emergency Payments:** Phase one, which involved splitting the AP file, is complete. Work is now in progress on the AP file creation piece specifically for emergency payments, with an expected completion by the end of the month.

### Data Reporting and Credential Management Strategy
The discussion focused on leveraging newly accurate data to improve long-term credential management and reporting, moving away from manual Smartsheet tracking.
- **New Power BI Report for Credentials:** With accurate credential data now in UBM, the next step is to create a new Power BI report. This report would provide a comprehensive view of all billing accounts, the credentials available for them, and a clear list of what remains to be created or confirmed (see the sketch following this list).
- **Integrating MFA Status:** There is a need to incorporate Multi-Factor Authentication (MFA) requirements for vendor portals into this unified view. This would inform team members upfront if MFA is needed when accessing an account.
- **Centralizing Onboarding Data:** A significant gap was identified in how bill access methods (portal, mail, email) decided during customer onboarding are tracked. Currently, this data resides in a transient onboarding sheet and is manually moved to Smartsheets, leading to visibility issues. The proposed solution is to define a single, reliable source (potentially by having CSMs upload a standardized file to HubSpot or by using a field in the Upsell API) where this data can be stored and then automatically fed into all relevant reports, including the new credential tracker.
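As referenced above, a minimal sketch of the credential-gap view the proposed report would provide, using hypothetical in-memory tables in place of the real UBM data; the actual report would of course be built in Power BI.

```python
# Hypothetical stand-ins for the UBM billing-account and credential tables.
accounts = [
    {"billing_account": "BA-1", "vendor": "ConEd"},
    {"billing_account": "BA-2", "vendor": "PG&E"},
]
credentials = [
    {"billing_account": "BA-1", "status": "confirmed", "mfa_required": False},
]

cred_by_account = {c["billing_account"]: c for c in credentials}

# One row per billing account: the credential if one exists, its MFA status,
# and a clear flag for what still needs to be created or confirmed.
for acct in accounts:
    cred = cred_by_account.get(acct["billing_account"])
    if cred is None:
        print(f"{acct['billing_account']} ({acct['vendor']}): credential MISSING")
    else:
        mfa = "MFA required" if cred["mfa_required"] else "no MFA"
        print(f"{acct['billing_account']} ({acct['vendor']}): {cred['status']}, {mfa}")
```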
### December Planning and Unplanned Work
Looking ahead to December, the validation errors workstream was confirmed as the next priority. Additionally, the teams completed several unplanned, high-priority Power BI reports that addressed the project's core problems. It was agreed to retroactively add these as a completed "unplanned work" line item for November to fully capture the team's output and demonstrate problem-solving agility.
OpenAI Monitoring
## Summary

### Output Service Development Progress
Significant progress has been made on architecting the new output service, with a clear plan established for data flow through a queuing system. The development has advanced to the stage where multiple dedicated queues have been created to manage the workflow. The architecture is designed to process data in distinct stages: one queue for compiling the data, another for validating it, and a final one for outputting the processed information (a minimal sketch of this staged flow appears at the end of this summary). While the foundational structure is in place, the actual implementation and coding of the logic within these individual queues is still in the early phases and requires further development. A dedicated follow-up discussion is planned to review this architecture in more detail and establish clear timelines for its completion.

### OpenAI Monitoring and Processing Pipeline
An analysis of the OpenAI processing pipeline revealed an unusual operational pattern that requires further investigation. The monitoring dashboards show that the system actively processes bills for only half of the day, remaining idle for the rest of the time. The initial hypothesis is that this is a positive sign, indicating that the pipeline is highly efficient and processes the entire daily queue quickly, leading to periods of inactivity because there is no backlog. However, an alternative possibility, that an underlying issue is preventing continuous processing, must be ruled out. A thorough check of the Power BI reports and the Azure IPS queue will be conducted to confirm that no data is stuck in the acquisition phase and to fully understand the pipeline's behavior.

### Infrastructure Monitoring with Grafana
The adoption of Grafana for infrastructure monitoring is being aggressively expanded to provide greater visibility and operational intelligence across all services. The current focus is on extending Grafana's capabilities beyond the existing Microsoft queue visualizations. A key initiative is to integrate monitoring for the internal application queues, which will provide real-time insights into backlog and processing volumes. Furthermore, there is a strong push to leverage Grafana's full potential across the entire tech stack, with plans to implement tracking for application errors, user errors, and various processes. This comprehensive approach aims to create a centralized observability hub, making full use of the investment in this top-tier tool to improve system reliability and performance.
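As referenced in the output-service section above, a toy sketch of the three-stage queue flow, using in-process queues purely for illustration; the real service presumably runs on dedicated message queues with concurrent workers, and the stage logic shown here is placeholder.

```python
import queue

# Three stages mirroring the described architecture: compile -> validate -> output.
compile_q, validate_q, output_q = queue.Queue(), queue.Queue(), queue.Queue()

def compile_stage(raw):
    return {"bill": raw, "compiled": True}

def validate_stage(item):
    item["valid"] = item["compiled"]  # placeholder validation rule
    return item

def drain():
    """Pull work through each queue in order; real workers would run concurrently."""
    while not compile_q.empty():
        validate_q.put(compile_stage(compile_q.get()))
    while not validate_q.empty():
        output_q.put(validate_stage(validate_q.get()))
    while not output_q.empty():
        print("output:", output_q.get())

compile_q.put("bill-001")
drain()
```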
Plan for Arcadia
## Summary
The meeting focused on critical challenges and strategic solutions for data processing and credential management, with a strong emphasis on preparing for the upcoming Arcadia integration. Discussions revolved around architectural improvements, data validation processes, and the need for a new, configurable data interpretation layer to solve persistent issues across platforms.

### Urgent Credential Management Challenges
A pressing issue involves the synchronization and accuracy of client credentials across multiple systems, which is critical for the imminent Arcadia integration. The current process is fragmented, leading to outdated credentials that prevent accurate data pulls.
- **System Synchronization Issues:** A recent attempt to sync the Data Services (DSS) credential table with the UBM platform revealed that the data was outdated compared to the primary source, Smartsheet. This creates a fundamental reliability problem for all downstream processes.
- **Preparation for Arcadia:** Arcadia requires a single, accurate set of vendor credentials to pull data. A plan was discussed to query all existing systems (Smartsheet, DSS, UBM) for credentials, identify duplicates, and send the most likely accurate set to Arcadia by a target date of December 15th, accepting that some may be incorrect initially.
- **Communication and Timeline Risks:** The Arcadia initiative has not yet been formally communicated to the Data Services team, creating a potential bottleneck. Furthermore, there is significant pressure from clients expecting a January go-live, but internal delays mean this timeline is already at risk, with the feasibility of a two-week turnaround by Arcadia being questioned.

### Strategic Arcadia Integration and Data Validation
The integration with Arcadia is a pivotal project that requires a carefully phased approach to data ingestion and validation to ensure system stability and data accuracy.
- **Dual-Path Data Ingestion Strategy:** A two- or three-step ingestion process was proposed for Arcadia data to mitigate risk. This would start with processing Arcadia's PDFs through the existing DSS pipeline for a quick start, then gradually transition to using Arcadia's direct JSON output once its reliability is validated against the PDF-derived data.
- **Proactive JSON Validation:** To preempt future processing errors, there is a plan to acquire sample PDFs and JSONs from Arcadia. These will be run through DSS to compare the outputs, identifying potential validation errors and mapping discrepancies before full-scale integration.
- **Long-Term Architectural Goal:** The ultimate objective is to confidently parse and normalize JSON directly from Arcadia, attaching the original PDF for reference within the platform without initially pushing it through the PSS system, thereby streamlining the workflow.

### Architectural Overhaul: The Interpreter Concept
A significant portion of the meeting was dedicated to designing a new, application-specific data enrichment layer, referred to as the "interpreter," to resolve recurring data mapping and validation issues. A sketch of the idea follows this list.
- **Solving Post-DSS Challenges:** The core problem identified is that DSS produces a standardized output, but applications like UBM and Carbon Accounting have unique, evolving logic requirements for mapping this data (e.g., matching bill IDs to virtual accounts using historical data). This logic is currently handled manually or through one-off scripts.
- **Separation of Concerns:** The proposed "interpreter" would be a configurable service that sits between DSS and the application-specific import services. It would enrich the DSS output with application-specific logic (e.g., UBM Enrich, CA Enrich) without altering the raw, source-of-truth data from DSS. This ensures that different applications can have their own mapping rules without affecting each other.
- **Modular Workflow Design:** This concept aligns with a broader architectural shift towards making DSS workflows more modular. The idea is to have different ingestion workflows for different data sources (like Arcadia), with the interpreter/enrichment layer acting as a plug-in module within those workflows to handle application-specific data massaging before the final import.
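To make the interpreter idea concrete, here is a minimal sketch of a pluggable enrichment registry, assuming hypothetical record fields and a stand-in virtual-account lookup; nothing here reflects the actual DSS schema.

```python
from typing import Callable, Dict, List

# DSS produces a standardized record; each application registers its own
# enrichment ("interpreter") without mutating the DSS source of truth.
Enricher = Callable[[dict], dict]
ENRICHERS: Dict[str, Enricher] = {}

def register(app: str):
    def wrap(fn: Enricher) -> Enricher:
        ENRICHERS[app] = fn
        return fn
    return wrap

def lookup_va(bill_id: str) -> str:
    return f"VA-{bill_id}"  # stand-in for the historical-data matching logic

@register("ubm")
def ubm_enrich(record: dict) -> dict:
    out = dict(record)  # copy, so the raw DSS output stays untouched
    out["virtual_account"] = lookup_va(record["bill_id"])
    return out

@register("carbon")
def ca_enrich(record: dict) -> dict:
    out = dict(record)
    out["emissions_scope"] = "scope2"  # illustrative CA-specific field
    return out

def interpret(app: str, dss_records: List[dict]) -> List[dict]:
    """Apply one application's enrichment to a batch of DSS records."""
    return [ENRICHERS[app](r) for r in dss_records]

print(interpret("ubm", [{"bill_id": "B42", "amount": 120.0}]))
```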
### System Reporting and Operational Visibility
Updates were provided on efforts to improve operational reporting and visibility, which is crucial for managing client expectations and internal accountability.
- **System Status Report Redesign:** A rebuild of the system status report is in progress, shifting the view from a "meters" basis to a more intuitive "builds" basis. This change aims to provide clearer insights into the pipeline's health and performance.
- **Clarifying Ownership and Workflow:** The updated report will include clearer descriptions and column ordering to define ownership and workflow stages explicitly. The goal is to reduce the volume of manual, ad-hoc inquiries about build statuses and missed payments by providing a self-service, filtered view for customers and internal teams.
- **Acknowledging Reporting Delays:** It was noted that while several new, urgent reports have been delivered, the originally promised reports for November and December are behind schedule. This will be communicated to stakeholders with the context that new priorities have displaced the original plan.
DSS/UBM Errors
## Summary

### Mapping Location Error 3018 Resolution
A new fuzzy matching logic has been implemented to resolve the 3018 mapping error for virtual accounts, aiming to automate the location mapping process in the future. The core strategy involves a two-step verification process to ensure accuracy (illustrated in the sketch at the end of this summary):
- **Fuzzy Matching with AI Embeddings:** The initial match uses AI and embeddings to calculate the similarity between service addresses and location addresses, successfully catching differences in abbreviations like "ST" for "Street".
- **Number Verification Check:** A secondary, crucial check verifies that the first numbers (e.g., the street number) in the addresses also match, preventing incorrect mappings for locations on the same street but with different numbers.

The immediate next step is a manual review of the generated file, which includes a column flagging matches where the first numbers do not align. The long-term goal is to integrate this refined logic directly into the platform to enable automatic mapping upon data ingestion, eliminating the need for manual intervention.

### Error Resolution Progress and Next Steps
Significant progress has been made on resolving a series of other data errors, with several tasks nearing completion and one remaining in progress.
- **Completed Tasks:** Files for error codes **2015**, **2016**, and **2024** have been prepared and are ready for review. These pertain to issues with prior balances and total charges.
- **Pending Task:** The resolution for error **2027**, related to power factor and load factor, is still in progress and requires further work.

Following the review and approval of the 3018 mapping file, the approved matches will be processed. A new, more current data backup will be used to generate the final mapping file to ensure data accuracy.

### Billing Overlap and Service Date Logic
The meeting addressed a critical procedure for handling situations where a mock bill is followed by an actual bill with overlapping service dates and identical amounts. The agreed-upon logic is designed to prevent duplicate payments and ensure proper reconciliation.
- **Cancellation Block Placement:** The cancellation bill block must be applied to the **actual bill** that follows the mock bill, not to the mock bill itself. This corrects a potential flaw in the initial approach that could have caused reconciliation issues.
- **System Safeguard:** A critical system improvement was identified: bills that have already been sent for payment **must be locked** and should not be deletable. This prevents the accidental deletion of bills that are already in the payment processing pipeline.
- **Potential for Duplicate Payments:** Concerns were raised about the possibility of duplicate payments occurring in the system, with at least one specific instance identified for further investigation. The effectiveness of the current query used to identify these overlapping bills was also questioned and will require further data verification.

### Access Requests
A separate administrative action item was noted regarding user access. There are three pending requests for system access for new team members, which will be formally submitted and addressed in the upcoming development sprint.
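As referenced in the mapping section above, a minimal sketch of the two-step match. The production system uses AI embeddings for the similarity score; `difflib` stands in here only to keep the example self-contained, and the 0.75 threshold is arbitrary.

```python
import re
from difflib import SequenceMatcher

def first_number(addr: str) -> str | None:
    """Extract the first run of digits (typically the street number)."""
    m = re.search(r"\d+", addr)
    return m.group() if m else None

def similarity(a: str, b: str) -> float:
    # Stand-in for the embedding-based cosine similarity used in production.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(service_addr: str, location_addr: str, threshold: float = 0.75) -> dict:
    score = similarity(service_addr, location_addr)
    return {
        "score": round(score, 3),
        "fuzzy_match": score >= threshold,
        # The crucial second check: same street but a different number must not map.
        "numbers_align": first_number(service_addr) == first_number(location_addr),
    }

print(match("123 Main ST", "123 Main Street"))  # abbreviation still matches; numbers align
print(match("123 Main ST", "125 Main Street"))  # fuzzy score passes, but numbers_align is False
```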
Daily Priorities
## LED Report Process and Data Sources
The meeting began with a detailed walkthrough of the process for generating the LED report, which is used for prioritizing mock bill submissions. The report is initially pulled from a specific system location, focusing on billing accounts with the "AP" or "bill pay" status. This data is then exported into a pre-formatted Excel template where calculations and summaries are performed. A critical dependency is that the customer package information in this report is fed from HubSpot, and its accuracy is paramount; inconsistencies arise when this data is not updated promptly by the CSMs.
- **Data Pull and HubSpot Integration:** The process starts by exporting the "Last Bills by Billing Account" report, which relies on accurate package information from HubSpot; this is a known point of failure, as delays in HubSpot updates can lead to accounts being missed in the workflow.
- **Excel Template and Prioritization:** The raw export is dropped into a specialized Excel template that summarizes accounts based on expected bill arrival and due dates; this summary is the primary tool for the team to prioritize which accounts to work on, with specific customers like Victra flagged as high priority.
- **Platform Discrepancy:** A key distinction was clarified between the data in UBM (used for this report) and Data Services; the UBM data reflects when a bill physically arrives in the system, which is the trigger for the mock bill workflow, whereas Data Services uses a different logic, leading to potential discrepancies in what is considered "available."

## System Outage and Data Integrity Follow-up
A system outage was acknowledged, which had generated some confusion internally due to a lack of clear communication. The team also discussed a backlog of data integrity issues, specifically concerning address mapping. A plan was established to review and approve updated files that would correct approximately 1,400 billing errors, with the expectation that this would resolve the integrity checks that had been flagged the previous day.

## Mock Bill Reconciliation and Cancellation Logic
A significant portion of the discussion focused on the process and potential risks associated with mock bill reconciliation, especially concerning "cancellation bill blocks." The manual reconciliation process was confirmed, with a specific concern raised about a recent bulk processing effort that may have inadvertently sent through mock bills that should have been blocked.
- **Manual Reconciliation Process:** The team manually identifies and archives bills with cancellation blocks, a process that was performed on a recent Friday, but there is lingering uncertainty about whether all ineligible bills were successfully filtered out.
- **Cancellation Bill Block Mechanics:** The logic of a cancellation bill block was explained in detail: it is applied to the actual, follow-up bill and contains negative charges that negate the identical service period and amounts found on the previously paid mock bill, ensuring the net charge is zero and the bill is not sent for payment (a worked example follows this list).
- **Risk of Over-consolidation:** A concern was noted that if a cancellation block is incorrectly applied to a mock bill instead of the actual bill, it could distort financial analysis by masking the payment data.
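A worked example of the cancellation-block netting described above, with hypothetical amounts and field names.

```python
# A mock bill was already paid; the actual bill that follows carries a
# cancellation block whose negative charges mirror the mock bill, so the
# actual bill nets to zero and is never sent for payment.
mock_bill = {"period": ("2025-10-01", "2025-10-31"), "charges": [842.17]}

actual_bill = {
    "period": ("2025-10-01", "2025-10-31"),
    "charges": [842.17],              # same service period, same amount
    "cancellation_block": [-842.17],  # negates the charge already paid via the mock
}

net = sum(actual_bill["charges"]) + sum(actual_bill["cancellation_block"])
send_for_payment = net != 0  # zero net -> nothing goes out, no double payment
print(f"net={net:.2f}, send_for_payment={send_for_payment}")
```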
## Priorities and Workflow Assignments
The immediate priorities for the team were outlined, focusing on clearing the backlog of errors and continuing with standard client work. The primary goal is to approve the corrected data files to resolve the 1,400 billing errors. For client work, the team will concentrate on processing bills for Victra and Extension, with clear rules for each.
- **Victra and Extension Processing:** For Victra, the team follows standard prioritization rules, while for Extension, the rule is to pay only current charges, removing any prior balances. Bills dated October 31st or earlier are to be flagged and moved to a projects queue.
- **Error Code 5002 Handling:** Bills triggering error code 5002, which indicates total charges exceed a $24,000 threshold, require explicit approval from the client before they can be processed.
- **Team Resources:** A new CSM has started and requires platform access, and the co-op team is being tasked with researching the top 30 utilities for the Victra client.

## Address Mapping Validation and Improvements
The initial results from an automated address mapping process were reviewed. While the first pass using "first word match" and fuzzy logic showed promise, it was clear that manual validation is still necessary to ensure accuracy, especially for clients with complex addressing like multi-unit buildings.
- **Initial Mapping Output:** The current export lists addresses where the first word of the location name is unique, providing a list of high-confidence matches, but this is just the first step.
- **Need for Customer Context:** To facilitate faster and more accurate manual review, it was agreed that future exports must include the customer name alongside the address and customer number; this context is crucial for reviewers to spot potential mismatches that an algorithm might miss, such as matching different units in the same building for a client where that would be incorrect.
- **Next Steps:** The plan is to refine the export to include customer names and continue refining the fuzzy logic, with a follow-up meeting scheduled to dive deeper into the mapping logic with the relevant developers.
Status Report Alignment
## Summary

### File Exchange Updates and Power BI Report
The file exchange changes have been successfully deployed to production, with the corresponding Power BI report also completed. The report requires minor updates, including the addition of a new column as requested. Once these final adjustments are made, the report will be shared in the designated team channel for review.

### Non-EDI Invoice Processing Script
A script for processing non-EDI invoices is currently being refined. The primary focus is on modifying the script to generate PDF files named according to a specific format, incorporating the vendor name and account number (a filename sketch appears at the end of this summary). A secondary goal is to optimize the script to process invoices in batches of 100, which would significantly improve efficiency over the current method. However, a strategic decision was made to postpone these optimizations to prioritize other development tasks. The final destination for the generated PDFs is still to be confirmed.

### System Status Report Recreation
A significant effort is underway to recreate the system status report, shifting its underlying metric from "meters" to "bills." The current logic for counting workflow steps was analyzed and found to be complex; for some steps, it counts unique bills, while for others, it pulls a meter count from the bills table. The initial objective is to rebuild the entire report to consistently use bill counts. Future enhancements will involve adding more status columns to provide a clearer and more detailed view of each bill's progress. A separate user interface for bills with unassigned clients was also proposed to better handle this data subset.

### Telerik UI Version Installation
The team is facing a blocker regarding the installation of a specific older version (6.2.0) of Telerik UI for Blazor, which is required for the project. The current developer accounts only permit downloading the latest version. To resolve this, the team is actively seeking assistance through official support channels and internal contacts to obtain the necessary permissions or offline documentation for the required version.

### Future Work and Process Integration
Looking ahead, the next major focus will be on the LED report. The plan is to begin by recreating this existing SSRS report within the Power BI environment. A key process improvement was highlighted: using the JIRA integration within Microsoft Teams to create and link issues directly from conversation threads. This will help maintain a clear audit trail of requests and their origins. The team was also instructed that all upcoming work, including the system status report, should be developed in Power BI for the time being, rather than as a web application, to streamline development.
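A small sketch of the requested naming convention; the exact `<vendor>_<account>.pdf` pattern and the character sanitization are assumptions, since the meeting only specified that the vendor name and account number be incorporated.

```python
import re

def invoice_pdf_name(vendor: str, account: str) -> str:
    """Build a '<vendor>_<account>.pdf' filename, replacing unsafe characters."""
    def safe(s: str) -> str:
        return re.sub(r"[^A-Za-z0-9-]+", "_", s.strip())
    return f"{safe(vendor)}_{safe(account)}.pdf"

print(invoice_pdf_name("Con Edison", "12-3456-7890"))  # Con_Edison_12-3456-7890.pdf
```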
Simon Script
## Summary
The meeting primarily focused on the analysis and consolidation of invoice lifecycle reports from multiple systems to address late payment issues and improve tracking accuracy across platforms. A significant portion was dedicated to planning the creation of a combined report that would provide a complete view of the invoice journey.

### Invoice Lifecycle Reporting Status
An update was provided on the current status of invoice lifecycle reports, confirming that one report had been completed and shared for review. The discussion centered on ensuring all necessary parties were aware of the completion while awaiting final feedback from an unavailable colleague. The report in question appears to be foundational for understanding the current state of invoice processing.

### Data Access and File Sharing Logistics
Practical steps were taken to facilitate collaboration, including downloading and sharing a large Excel file containing data on items missing image links. This file, noted to be approximately 66 megabytes and containing around 500 entries, was identified as a point for future investigation. The team coordinated to ensure all members had access to the necessary data files through Slack, emphasizing the importance of having the underlying data available for manipulation and analysis.

### Combining Data Services and UBM Reports
A core objective discussed was the integration of invoice lifecycle data from two primary systems: Data Services (DS) and UBM. The goal is to create a unified view that tracks an invoice from creation through to payment, identifying at which stage delays occur. Currently, these systems operate with different status definitions and tracking mechanisms, creating blind spots in the process.
- **Understanding Delay Causes:** The combined report aims to pinpoint whether delays happen during data download, processing within DS, issue resolution in UBM, or due to client-side funding problems. This is crucial for accurately assigning responsibility for late fees.
- **Business Imperative:** The urgency for this combined report is driven by a real-world incident where the company faced financial liability for late fees that were partially caused by a client's delayed funding, highlighting a critical gap in current tracking capabilities.
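A minimal sketch of the combined, source-labelled view described above, using hypothetical extracts keyed on vendor and account number; the real report would live in UBM or Power BI, and the column names are illustrative.

```python
# Hypothetical extracts keyed on (vendor, account); real schemas may differ.
ds_rows = {("ConEd", "001"): {"ds_received_date": "2025-11-02"}}
ubm_rows = {("ConEd", "001"): {"ubm_invoice_date": "2025-10-28",
                               "ubm_loaded_date": "2025-11-09"}}

combined = []
for key in sorted(set(ds_rows) | set(ubm_rows)):
    row = {"vendor": key[0], "account": key[1]}
    row.update(ds_rows.get(key, {"ds_received_date": None}))
    row.update(ubm_rows.get(key, {"ubm_invoice_date": None, "ubm_loaded_date": None}))
    combined.append(row)

# Side-by-side, source-labelled dates make any processing gap visible:
for row in combined:
    print(row)
```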
### Technical Implementation and Access Requirements
The conversation detailed the technical and administrative steps required to build the combined report. A key hurdle identified was securing the appropriate data access permissions for the team member tasked with the integration.
- **Platform Decision:** It was decided that the final combined report would likely reside within the UBM system to ensure broader accessibility for stakeholders, including other teams who would need to view the data without facing access barriers.
- **Access Coordination:** A plan was formulated to contact system administrators and DevOps personnel to grant the necessary access privileges to the underlying UBM databases, not just the pre-built Power BI reports.

### Next Steps and Immediate Actions
The meeting concluded with a clear action plan to advance the project. A message will be sent to the relevant system owners to request data access, enabling the immediate start of work on the combined report. The team committed to a follow-up discussion to review progress and refine the approach based on the accessed data. The assigned individual will begin familiarizing themselves with the UBM data structure to expedite the development of the combined reporting view.
Report
## Report Requirements for Abhinavan
The meeting focused on clarifying the specific data and details needed for a report requested by Abhinavan, which centers on processed invoice information rather than forecasts. Key elements include tracking invoice dates, due dates, and lateness metrics to provide insights into client payment behaviors.
- **Invoice processing details**: The report should capture data from downloaded processed invoices, including the invoice date and due date, to calculate how many days late payments are, which helps in identifying delays and improving cash flow management.
- **Client-specific insights**: Information must be organized by client account, enabling a granular view of payment patterns and potential issues per client, which supports targeted follow-ups and relationship management.
- **Exclusion of forecast data**: Unlike other reports, this one does not involve projected units or expected due dates, emphasizing actual historical data for accuracy in assessing current performance.
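A minimal sketch of the lateness calculation the report centers on, assuming a paid (or processed) date is available per invoice; the field names are illustrative, not the real schema.

```python
from datetime import date

# Illustrative processed-invoice rows.
invoices = [
    {"client": "Acme", "invoice_date": date(2025, 10, 1),
     "due_date": date(2025, 10, 25), "paid_date": date(2025, 10, 30)},
]

for inv in invoices:
    # On-time or early payments count as 0 days late.
    days_late = max(0, (inv["paid_date"] - inv["due_date"]).days)
    print(f"{inv['client']}: {days_late} day(s) late")
```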
## Distinction from Lab Reports
A clear differentiation was made between the invoice report and lab reports, with the latter focusing on forecasted metrics rather than actual processed data. This distinction ensures that the right type of analysis is applied for decision-making.
- **Lab report purpose**: Lab reports provide forecasts on expected units and due dates, serving as a predictive tool for planning and resource allocation, whereas the invoice report is retrospective and based on real transaction data.
- **Focus on current performance**: The discussion highlighted that Abhinavan's request is specifically about analyzing existing invoices for lateness and processing timelines, which aligns with operational reviews rather than forward-looking strategies.
- **Clarification from previous mentions**: It was noted that while lab reports had been discussed earlier for forecast-related insights, the current priority is actionable data from processed invoices to address immediate concerns.

## Implementation and Task Assignment
Actions were defined to execute the report creation, with a focus on task tracking and completion to ensure timely delivery.
- **Task creation and tracking**: A task will be created in a management system (e.g., MGR) to formalize the work, providing a clear timeline and accountability for producing the invoice report, which streamlines project management.
- **Resource readiness**: Confirmation was given that all necessary data and tools are available to generate the report, minimizing potential delays and ensuring a smooth implementation process.
- **Progress updates**: Regular communication will be maintained to report on task completion, allowing for adjustments and ensuring alignment with stakeholder expectations.

## Future Discussions and Access
The need for additional conversations was identified, particularly regarding lab report access or related systems, to address unresolved questions.
- **Upcoming dialogue on lab reports**: A separate discussion is planned to cover aspects of lab reports, such as access permissions or data retrieval methods, which could complement the current invoice analysis.
- **Meeting rescheduling**: While a previous meeting was canceled, a new one will be set up to dive deeper into lab-related topics, ensuring that all report types are adequately addressed in future collaborations.
- **Integration of insights**: These discussions aim to bridge gaps between forecast and actual data, potentially enhancing overall reporting accuracy and utility for stakeholders.

## Meeting Logistics and Follow-up
Logistical aspects of the meeting were briefly addressed, including cancellations and plans for future interactions to maintain momentum.
- **Cancellation acknowledgment**: The canceled meeting did not hinder progress, as key decisions were made in this session, demonstrating flexibility in adapting to scheduling changes.
- **Scheduling forward steps**: Intentions to set up another meeting at a later time were confirmed, emphasizing continuous engagement to tackle pending items and ensure all report requirements are met efficiently.
Product Forum - CA and UBM
## Summary
The meeting served as an introduction and deep-dive into the two primary products within the Navigator platform: **Utility Bill Management (UBM)** and **Carbon Accounting**. The discussion centered on understanding the products, their target audiences, current marketing challenges, and opportunities for enhancing product communication and promotion.

### Product Overview and Team Introductions
The session began with introductions from the product team members, who provided background on their roles and the products they oversee. The **Utility Bill Management (UBM)** platform is a tool that helps customers manage utility bill payments and analyze utility spend across multiple sites. The **Carbon Accounting** application enables commercial and industrial customers to measure their emissions, set sustainability goals, and track their progress.

### Target Personas and Use Cases
A clear distinction was drawn between the typical users of each product, highlighting that they are usually different individuals within the same organization. The **Carbon Accounting** persona is often a sustainability director or manager focused on calculating, reporting, and meeting greenhouse gas emission targets. In contrast, the **UBM** persona is typically an energy manager, facility manager, or procurement specialist whose primary concern is the operational task of paying bills on time, ensuring accuracy, and gaining visibility into utility spending.

### Industry Focus and Customer Pain Points
The discussion revealed how the adoption of each product is influenced by different industry dynamics and customer priorities. **Carbon Accounting** sees traction in sectors like food and heavy industrials, where companies face pressure from large partners like Walmart or Amazon to report emissions, and from large enterprises like Comcast or Pepsi that have public, C-suite-mandated sustainability targets to uphold. Conversely, **UBM** is most successfully sold to **multi-site customers** with complex utility bill management needs, where the value proposition is a tangible reduction in operational expenses (OPEX) by identifying overpayments and avoiding late fees.

### Current Feature Release and Communication Strategy
The product teams operate on a roughly **quarterly release cadence** for new features, though these are often enhancements to existing functionality rather than entirely new products. The current communication strategy for announcing these updates was detailed:
- **Carbon Accounting** utilizes an internal and customer-facing process involving an **email template with GIFs and links** to a dedicated support article that explains the new feature.
- A significant challenge identified is the need to develop a cohesive strategy for communicating product updates across different customer segments, particularly distinguishing between standalone Navigator clients and those who are retail power/gas customers now gaining access via the CEP (Constellation Energy Platform) integration.

### Marketing and Promotion Opportunities
Several key opportunities for marketing to better support the products were explored:
- **Website Enhancements:** A strong desire was expressed to create a **public-facing "Release Notes" or "Updates" page** on the website, similar to competitors like Watershed, to showcase new features and build credibility.
- **Webinars:** There is a significant opportunity to develop a webinar program, both live and on-demand, to educate potential customers. This could include partnering with existing industry vendors and leveraging webinars for **content syndication** to generate leads.
- **Lead Nurturing:** The idea of creating an **email sign-up on the website** for potential customers to stay informed about Navigator platform updates was proposed as a way to build a marketing database.
- **Sales Enablement:** Ensuring that internal sales teams, including Vendor Managers (VMs), are fully aware of new features and equipped with the right messaging to promote them effectively was highlighted as a critical need.

### Competitor Analysis and Future Collaboration
The team analyzed a competitor's website to understand best practices for presenting product updates. Looking ahead, there was agreement to **align product and marketing roadmaps** for the upcoming year to identify quick wins and ensure marketing strategies are prepared for upcoming product launches and initiatives.
LAB Report
## Database Cleanup Progress
Significant progress has been made on cleaning up the DB2 column, with the team focusing on a conservative and methodical approach. The work has involved bypassing problematic areas to make headway, and the majority of the remaining work is now concentrated in the "unmapped bucket." A new logic is being developed to resolve these unmapped entries, but the team is proceeding cautiously after discovering some initial flaws. The goal is to resolve a high percentage (90%+) of these entries reliably, rather than pushing a fix that only handles a small portion. Historically, a customer-by-customer review via spreadsheet was sometimes more effective for ensuring logical sense, and the current strategy involves reviewing the most frequent issues and tackling them systematically.

## Lab Report Integration Initiative
A key priority is the creation of a combined lab report view that unifies data from different sources, specifically Data Services (DS) and UBM. Historically, the team moved away from using the Data Services report in favor of the UBM one, but this has created challenges as the data does not always match. The immediate plan is to schedule a meeting with the primary experts, Tim and Afton, to align on the correct data source and logic. This unification is critical because discrepancies between the two reports indicate underlying data problems that need to be resolved.

## Operational Challenges and Process Adjustments
The meeting highlighted several ongoing operational hurdles and the need for short-term manual fixes. A new, streamlined approval process is being established for these fixes, involving a small group to review and sign off on changes to ensure compliance, especially with an upcoming compliance season. The manual processes, particularly for the lab report, have been time-consuming and difficult to automate, with an estimated 50% of the work requiring human intervention. The team is also dealing with onboarding delays for new personnel, such as a new CSM who has received equipment but is still waiting for a laptop, which is slowing down the delegation of tasks.

## Transitioning Work to the DSS Platform
A major strategic goal discussed is the transition of operational work from the legacy DDS system to the new DSS platform. There is a pressing need for at least one operator to begin working directly within DSS to provide feedback and help build out its functionality. Currently, there are hundreds of bills in DSS that require pre-audit, and the team cannot effectively address the system's shortcomings without real-world usage. The plan is to encourage Afton to take on this role, as she has shown interest in learning the new system and has been asking for fixes. The transition will be gradual, focusing on moving what can be handled in DSS to reduce the manual load in DDS, even if it's a small percentage at a time. The ultimate objective is to reduce dependency on the legacy system, though it is acknowledged that some complex tasks will remain in DDS for the foreseeable future.

## Strategic Alignment and Weekly Planning
To improve coordination, a recurring meeting will be established with Data Services, specifically involving Afton, to align on weekly priorities. This is necessary because there is currently no formal process for setting these priorities, leading to potential misalignment and delays.
The goal is to create a consistent feedback loop where the operations team can communicate its needs to the development team, ensuring that necessary features for DSS are built. This regular communication is seen as vital for making steady progress on the transition and for managing the backlog of work effectively.
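As an illustration of the unmapped-bucket triage described above, here is a minimal sketch: entries are counted by raw value, candidate rules are applied to the most frequent values first, and a fix only ships once it clears the 90% bar. The field names and mapping table are assumptions, not the real DB2 schema.

```python
from collections import Counter

# Hypothetical unmapped DB2 entries; the field names are assumptions.
unmapped = [
    {"customer": "acme", "raw_value": "ELEC-KWH"},
    {"customer": "acme", "raw_value": "ELEC-KWH"},
    {"customer": "globex", "raw_value": "GAS??"},
]

# Assumed mapping table, built by reviewing the most frequent issues first.
rules = {"ELEC-KWH": "electric_kwh"}

def triage(entries, rules, threshold=0.90):
    """Report what share of the bucket the current rules would resolve."""
    freq = Counter(e["raw_value"] for e in entries)
    resolved = sum(n for value, n in freq.items() if value in rules)
    coverage = resolved / len(entries)
    # Ship only when the logic reliably clears the bulk of the bucket.
    return coverage, coverage >= threshold

coverage, ship = triage(unmapped, rules)
print(f"coverage={coverage:.0%}, ship={ship}")  # coverage=67%, ship=False
```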
Navigator UBM - Security Review - Follow-up Series
## Summary The meeting primarily focused on reviewing the current technical design for the Navigator project, with the goal of achieving alignment and ensuring it adheres to organizational standards before proceeding. Key discussion points revolved around validating the overall design, addressing specific non-standard components, and resolving several outstanding technical questions. ### Design Review and Alignment The core objective was to confirm that the proposed design is acceptable and compliant with security and infrastructure standards, assuming all components are technically feasible. - **General Design Approval:** The design was presented as the latest iteration, with the explicit goal of reaching a state that can be considered the final "home" for the Navigator project. - **Compliance with Standards:** A significant part of the discussion involved ensuring the design fits within the organization's security and environment standards, with an acknowledgment that some items would require documentation as exceptions. ### Documented Exceptions and Non-Standard Components It was acknowledged that the current design includes elements that do not perfectly align with standard practices, and a plan was formulated to manage these deviations. - **Reporting Access Exception:** A specific exception for reporting access was identified and agreed upon. The plan is to formally document this deviation and accept it for the purposes of this specific project. - **Tracking Deviations:** The team emphasized the importance of tracking these non-standard items to maintain oversight and accountability for the design decisions. ### Access and Feedback on Design Documents A practical hurdle involved accessing the latest design documents, which momentarily slowed the review process. - **Access Issues:** Immediate access to the shared design document was not available for all participants, requiring a request for permissions to be submitted during the call. - **Incorporation of Feedback:** It was noted that previous feedback had been provided on the design, with the expectation that it would be incorporated, though a final review of the updated document was pending for some attendees. ### Outstanding Technical Questions Two significant technical questions from a previous meeting were revisited, as they remained unresolved and critical for the final design. - **Scheduled Process Implementation:** The design's suggestion to use a Logic App for running scheduled reports from Okta was flagged as a potential issue. The concern is that this service may not be supported within the organization's standard infrastructure, prompting a discussion on alternative methods like cron jobs or other approved scheduling mechanisms. - **Integration Runtime VM for Power BI:** The purpose and necessity of an "Integration Runtime VM" shown in the diagram were unclear to several participants. This component is apparently related to enabling a Power BI integration with the project's database, but the specific connection method and its justification required further investigation. ### Path Forward and Next Steps The conversation concluded by outlining the immediate next steps to resolve the open items and solidify the design. - **Follow-up on Technical Questions:** The team committed to following up on the two main technical questions regarding the Logic App and the Integration Runtime VM to determine viable, standards-compliant solutions. 
- **Operational Guidance:** Once the design is approved, the plan is to provide the application team with a clear set of operational guidelines and runbooks, as this environment is new to them. - **Best Practices Review:** It was noted that a separate review of best practices for the platform-as-a-service (PaaS) constructs used in the application is also pending.
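For the open scheduling question above, a minimal sketch of the cron-style alternative to a Logic App. The endpoint, script path, and token handling are placeholders, not the organization's actual setup.

```python
# The cron entry would live on an approved host, e.g.:
#   0 6 * * 1  /usr/bin/python3 /opt/navigator/pull_okta_report.py
import json
import urllib.request

REPORT_URL = "https://example.okta.com/api/v1/..."  # placeholder endpoint
API_TOKEN = "..."  # would come from an approved secret store, never source code

def pull_report():
    # Okta API tokens are passed in the Authorization header.
    req = urllib.request.Request(
        REPORT_URL, headers={"Authorization": f"SSWS {API_TOKEN}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```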
DSS Workflow Fixes
## Summary The meeting focused on addressing critical issues within the Data Services System (DSS) and its interaction with the legacy system (DDIS), with the primary goal of streamlining workflows and reducing manual intervention. Discussions revolved around specific error types, output failures, and proposed solutions to enhance system efficiency and data integrity. ### DSS Audit Queue and Error Handling The DSS audit queue is designed to flag bills requiring operator intervention, but several improvements were identified to make it more effective. The system currently halts bills with validation errors, such as invalid units of measure or service item descriptions, but the process could be optimized by redirecting certain errors automatically. - **Redirecting unit of measure errors to Account Setup:** This would leverage existing vendor and account-level notes stored in DDIS, allowing for faster resolution without manual operator searches in DSS. - **Addressing invalid observation types:** A recurring issue was identified where DSS sometimes creates its own, invalid observation types (e.g., "non fuel charge charge") instead of using the correct, system-recognized ones (e.g., "charge PR"). These errors are not always visible to the team until the bill fails later in the process. - **Enhancing communication with notes:** When bills are rerouted from DSS to other workflows like Account Setup, it was requested that a note be automatically appended explaining the reason for the rerouting. This would prevent confusion and save time for operators who currently have to investigate the cause manually. - **Rerouting logic for complex bills:** The logic for handling bills over six pages was discussed. Currently, these should be kicked out of DSS, but they are sometimes automatically picked up again, creating a loop. A proposal was made to send any bill with an existing client, vendor, and account number directly to the data audit queue instead. ### Output Service and Data Capture Issues Significant problems were reported with the output service, where data captured in DSS leads to failures when sent to the downstream UBM system. These issues often result in entire batches being rejected, necessitating manual reprocessing of all bills in the batch. - **Broken batches from blank meter lines:** A major cause of batch failures occurs with deregulated accounts and zero-usage bills. The system captures service dates on the supply meter but no usage or charges, resulting in a blank line on the output file that the UBM system cannot ingest. - **Re-emergence of a previously fixed issue:** This problem with supply meters had been resolved but has recently resurfaced, leading to a high volume of failed batches, including for new clients, indicating a regression in the system. - **Proposed solution of splitting batches:** To mitigate the impact, it was suggested that batches be broken into individual bills for output. This would ensure that only the problematic bill fails, rather than causing the entire batch to be rejected. ### Observation Type and Data Integrity Problems Beyond the output service, critical data integrity issues were highlighted where observation types and units of measure are being incorrectly altered during processing, leading to bills getting stuck. - **Blank or incorrect observation types:** Bills are being captured and sent to "ready to send" with blank observation types or observation types that do not exist in the system, forcing teams to manually pull them back and investigate. 
- **Incorrect unit of measure assignment:** A specific issue was described where the unit of measure on a default meter is changed to dollar signs, creating a cost line with no usage data. This also causes bills to fail downstream. - **Manual workaround and its limitations:** The current fix for these data issues involves manually using the "clear data" button, which reverts the units of measure and allows for reprocessing. However, this is a time-consuming and reactive solution that highlights a need for a more robust fix within DSS. ### Proposed System Improvements and Next Steps The discussion concluded with a focus on short-term fixes and longer-term system enhancements to address the root causes of these recurring problems. - **Short-term data routing with context:** An immediate action item is to configure DSS to push bills with specific errors, like unit of measure issues, to DDIS (Account Setup). Crucially, this will include sending detailed notes that identify the problematic meter by commodity type and meter number, along with the specific error message. - **Targeting key pain points:** The team committed to investigating several high-priority issues, including blank observation types, the "do not use" meter list, and the root cause of the broken batches from blank supply meter lines. - **Long-term vision for DSS:** The ultimate goal is to fully defang the legacy output service by having DSS handle output natively, thereby eliminating reliance on the legacy system and its associated failure points. Ensuring the DSS pre-audit queue is effective is a critical step toward this goal.
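A minimal sketch of the rerouting rules proposed above: the unit-of-measure redirect with an auto-appended note naming the meter, the data-audit shortcut for long bills with a known client, vendor, and account, and single-bill output batches. Queue names and field names are assumptions about the DSS schema.

```python
def route_bill(bill):
    """Return (destination_queue, note) for a bill held in the DSS audit queue."""
    errors = bill.get("errors", [])
    uom = [e for e in errors if e["type"] == "invalid_unit_of_measure"]
    if uom:
        meter = bill["meter"]
        # Auto-append the context operators currently have to dig up by hand.
        note = (f"Rerouted from DSS: invalid unit of measure on "
                f"{meter['commodity']} meter {meter['number']}: {uom[0]['message']}")
        return "account_setup", note  # resolved via vendor/account notes in DDIS
    if bill.get("page_count", 0) > 6 and all(
        bill.get(k) for k in ("client", "vendor", "account_number")
    ):
        return "data_audit", None  # keeps long bills out of the DSS pickup loop
    return "dss_audit", None

def split_batch(batch):
    """One bill per output batch, so a bad bill no longer rejects its neighbors."""
    return [[bill] for bill in batch]
```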
UBM Planning
## Summary This meeting centered on addressing critical issues with a billing report system following the departure of a key team member. The discussion focused on understanding the current workflow, identifying data accuracy problems, and determining the necessary steps to create a reliable stopgap solution for customers. ### Purpose of the Billing Report The primary function of the report is to identify bills that are imminently due but are not yet present within the UBM system. It projects the next expected invoice and due dates based on the most recent bill information available in UBM for a given billing account, which is a combination of vendor and clean account number. The system is designed to flag accounts where a payment is expected soon, triggering a specific internal process. ### Data Accuracy and Processing Concerns A significant challenge identified is the report's reliance on data within UBM, which does not account for bills stuck in external processing stages or those not yet collected from vendor portals. The accuracy of the report has been compromised because it previously benefited from a human element that manually verified and corrected data, a step that is now missing. This has created a gap in ensuring customers have a clear and accurate view of their invoice status. ### The Lab Team's Workflow and Responsibilities A dedicated team, referred to as the "lab," is responsible for acting on the report's alerts. Their core task involves finding due amounts on customer portals and submitting mock bills for payment to prevent missed deadlines. This is a straightforward process designed to be executed without requiring deep system expertise. Furthermore, this team plays a crucial role in identifying data discrepancies during their review, such as discovering closed accounts or new account numbers, which they flag for administrative follow-up. ### Issues with Account Status Filtering A specific technical problem was highlighted concerning the filtering of open and closed accounts. A miscommunication in the report's filter logic has resulted in closed accounts appearing on the actionable list. The intended filter was for the billing account status, but it was incorrectly applied to the virtual account data. This forces the lab team to manually filter out accounts that are already known to be closed, adding an unnecessary step and potential for error to their workflow. ### Systemic Challenges and Next Steps The conversation concluded that the problems are multifaceted, involving both the report's configuration and potential underlying issues in the broader billing system. The core question is whether to fix the existing report to make it more helpful, resolve a deeper system issue that would consequently fix the report, or completely rethink the report's purpose and design. The immediate plan is to implement a short-term stopgap, but a more comprehensive solution is required to address the root causes of the data synchronization and accuracy challenges.
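A minimal sketch of the report's projection and the corrected status filter, assuming each billing account record (vendor plus clean account number) carries its latest UBM invoice and due dates; field names and the 30-day cycle default are illustrative.

```python
from datetime import timedelta

def next_expected(last_invoice, last_due, cycle_days=30):
    """Project the next invoice/due dates from the most recent bill in UBM."""
    step = timedelta(days=cycle_days)
    return last_invoice + step, last_due + step

def lab_list(accounts, today, horizon_days=7):
    """Accounts whose projected due date is imminent with no new bill in UBM."""
    actionable = []
    for acct in accounts:
        # The fix: filter on the *billing account* status, not the virtual
        # account data, which is what let closed accounts onto the list.
        if acct["billing_account_status"] != "open":
            continue
        _, due = next_expected(acct["last_invoice_date"], acct["last_due_date"])
        if due <= today + timedelta(days=horizon_days) and not acct["new_bill_in_ubm"]:
            actionable.append(acct)
    return actionable
```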
DSS Daily Status
## Summary The meeting focused on identifying and analyzing the various methods operators use to upload invoices, with the primary goal of determining which methods are currently being tracked and which are not. The discussion centered on creating a comprehensive plan to implement tracking for all identified upload channels. ### Current Tracking Status The meeting began by establishing a baseline of the current invoice upload tracking capabilities. It was confirmed that only one method is actively being tracked at present. - **Web Downloads in FTG Connect:** This is the sole method where the upload of individual bills is currently being tracked by the system. ### Identified Untracked Upload Methods A significant portion of the conversation was dedicated to listing and understanding the invoice upload methods that currently lack tracking. Several key channels were identified as gaps in the current monitoring system. - **File Exchange in FTG Connect:** This method allows users to upload multiple invoices at once and is a notable gap in tracking that needs to be addressed. - **Direct DSS Upload:** Uploading invoices directly through the DSS (Data Submission System) interface is another common method that is not currently tracked. - **FTP Share/File Host:** Invoices submitted via an FTP (File Transfer Protocol) share or file host are processed by a "File Watcher" system, which currently attributes the uploads generically and does not track the specific user. - **Email Import:** Invoices received via email are also processed by the "File Watcher," resulting in a similar lack of user-specific tracking data. ### Implementation Considerations and Challenges The discussion then moved to the practical aspects and potential obstacles involved in implementing tracking for the untracked methods. The ease of implementation and the challenge of user attribution were key points. - **Attribution Complexity for Automated Methods:** A significant challenge identified is attributing uploads from automated sources like the FTP share and email imports. These are currently flagged under a generic "File Watcher" name, making it difficult to identify the actual operator responsible for the invoice. - **Varying Levels of Implementation Effort:** The team assessed that implementing tracking for the **File Exchange in FTG Connect** would be a relatively straightforward task. Similarly, tracking for **email imports** was considered feasible by potentially linking uploads to the sender's email address. ### Next Steps and Recommendations To move forward systematically, a clear action plan was formulated to address the tracking gaps. The approach involves breaking down the work into manageable tasks. - **Creation of Discrete Development Tickets:** It was decided to create separate, specific tickets for each untracked upload method: one for the File Exchange, one for the FTP share, one for DSS uploads, and one for email imports. This will allow the engineering team to address each channel methodically. - **Prioritization of the Easiest Solution:** The team plans to start implementation immediately with the method deemed the easiest to track, **File Exchange in FTG Connect**, to deliver a quick win and establish tracking for a bulk upload channel. - **Interim User Guidance:** A potential short-term recommendation is to advise users who need to perform bulk uploads to utilize the File Exchange method in FTG Connect, as it will be the first to have tracking capabilities restored.
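A minimal sketch of a per-upload tracking record covering the channels listed above, including the proposed sender-address fallback for email imports; the field names and fallback rule are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UploadEvent:
    method: str       # "web_download", "file_exchange", "dss_upload", "ftp_share", "email_import"
    file_name: str
    uploaded_by: str  # resolved operator, or a generic fallback
    received_at: datetime

def attribute(method, user=None, sender_email=None):
    if user:
        return user              # interactive channels know the operator
    if method == "email_import" and sender_email:
        return sender_email      # proposed: attribute email imports to the sender
    return "file_watcher"        # automated channels still lack a real user

event = UploadEvent("email_import", "invoice.pdf",
                    attribute("email_import", sender_email="ops@example.com"),
                    datetime.now(timezone.utc))
```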
LAB Report
## Customer The customer operates within accounts payable and utility bill management, relying on automated systems to process and pay a high volume of invoices for multiple clients. Their role involves ensuring payments are made on time to avoid late fees, which requires clear visibility into the entire payment lifecycle, from invoice ingestion to the final funding of payments by clients. Their background suggests a deep operational involvement in troubleshooting payment discrepancies and managing client expectations regarding payment performance. ## Success The most significant success identified is the development of a **manual workaround process** to create a daily "missing bills" report. This custom report was engineered to provide clients with a more digestible and actionable view of overdue payments than the standard, system-generated lab report. Despite being a manual and labor-intensive process, this solution has become critical for maintaining a dialogue with key clients, offering them a semblance of control and visibility into payment statuses that the core platform currently lacks. It serves as a vital stopgap for customer communication and trust-building. ## Challenge The single biggest challenge is a **profound lack of data visibility and system reliability**, creating a cascade of operational issues. A critical data field, the date a client funds a payment, is not being correctly pulled from the payment processor's API into internal reporting tools. This missing data point makes it impossible to determine responsibility for late fees, as there is no way to distinguish between delays caused by internal processing and those caused by the client's own funding timeline. Furthermore, the primary internal tool for identifying overdue bills, the "lab report," is fundamentally untrustworthy. Its inaccuracies force teams to rely on inefficient, manual workarounds and vendor-based bill pulling instead of a targeted, priority-driven approach, leading to missed payments and eroding client confidence. ## Goals The customer's objectives are centered on achieving clarity, accuracy, and efficiency in their bill pay operations. Their key goals are: - **Gain Clear Visibility into Payment Funding:** The foremost goal is to have the "funded date" from the payment processor reliably integrated into their dashboard and reports. This is essential for accurately assigning responsibility for late fees and defending against erroneous client claims. - **Establish a Trustworthy Missing Bills Report:** They need to either fix the existing lab report or build a new, accurate query that reliably identifies which bills are genuinely missing and require immediate action. This report is crucial for both internal operations and client-facing communications. - **Automate and Eliminate Manual Workarounds:** A core goal is to move away from fragile, manual reporting processes. The aim is to have system-driven, automated solutions that are scalable and reliable, freeing up valuable resources from daily firefighting. - **Enable Efficient Dispute Resolution:** They require readily available data to provide "ammunition" for customer success teams, allowing them to have factual conversations with clients about late payment responsibilities, thereby protecting revenue and managing relationships.
- **Rethink the Core Bill Acquisition Process:** Ultimately, there is a goal to fundamentally reassess and improve the entire process of identifying and acquiring bills, moving beyond a system that perpetuates uncertainty and requires constant manual intervention.
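As a discussion aid for the reporting goals above, a minimal sketch of what a trustworthy missing-bills query might look like once the processor's funded date is ingested. Table and column names are invented; the real schema is not described in the meeting.

```python
# Hypothetical query; "expected_bills" and "processor_payments" are placeholder
# tables standing in for whatever the real reporting database exposes.
MISSING_BILLS_SQL = """
SELECT b.account_number,
       b.vendor,
       b.due_date,
       p.funded_date                 -- the field currently absent from reporting
FROM   expected_bills b
LEFT JOIN processor_payments p
       ON p.bill_id = b.bill_id
WHERE  b.received_at IS NULL         -- bill never ingested
   OR (b.due_date < CURRENT_DATE     -- or overdue with no client funding yet
       AND p.funded_date IS NULL)
ORDER BY b.due_date;
"""
```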
Report Requirements Alignment
## **Summary** The meeting primarily focused on significant challenges and unresolved questions regarding the development of custom reports for a client, Simon. A central theme was the gap between the client's expectations, established during the sales process, and the current state of requirements gathering and development. The team lacks the detailed specifications needed to begin building the reports, leading to a strategy session on how to proactively gather the necessary information and reset timelines. ### **Core Challenge: Delayed Custom Reports** The team is behind on developing several custom reports promised to the client, with a major issue being the lack of clear, field-level specifications. The client's expectation was that work would begin shortly after the contract was signed in July, but development could not start without detailed requirements that have not yet been provided. This has created a risk to the project timeline and client satisfaction. ### **Analysis of the Required Report Deliverables** The discussion identified four key reporting deliverables expected by the client. However, the exact nature and data sources for some remain ambiguous. - **AP (Accounts Payable) Files:** This is the highest priority and consists of two formats: a text file (TXT) and an Excel file (XLS). While some initial testing has occurred, development is stalled because the mapping of each column in the client's example files to specific data fields within the team's platform (UBM) is undefined. - **CET File:** This report appears to be similar to an accrual report but includes additional columns for taxes and late charges. Similar to the AP files, the source of the data for these specific columns is unknown. - **Bill Details Report:** This is understood to be a replication of a very detailed report from the client's previous provider (NG), containing comprehensive invoice information. The team is confident in their ability to build a report of this nature but requires clarity on the specific data mappings. ### **The Critical Block: Unclear Data Mapping and System Configuration** A significant portion of the meeting was dedicated to the fundamental blocker: the team does not know how to populate the data for the reports. The client has provided example files but no "translation key" for their data fields. - **Field Mapping Unknowns:** For nearly every column in the provided report examples, it is unclear what the corresponding data attribute is within the team's system. For instance, a "Vendor Name" column could be a standard field or require a custom vendor attribute, and this distinction drastically changes the development approach. - **Dependency on Client Setup:** The reports cannot be finalized until the client's account, locations, and vendors are fully configured within the platform with the correct attributes. The team cannot make assumptions about these configurations and requires explicit direction from the client. - **GL Allocation Complexity:** The reports appear to require General Ledger (GL) allocation data, a feature that was not fully developed at the time of the sale. While the platform can now handle this, it introduces another layer of complexity that needs to be precisely defined by the client. ### **Onboarding Complications and Data Integrity** A separate but related issue concerns the data provided for onboarding the client's accounts. 
The team is struggling with an inconsistent account list where it is difficult to programmatically distinguish between EDI and non-EDI accounts, and some accounts are missing essential bill image links. This onboarding delay has a cascading effect, as the reports depend on this data being correctly ingested and structured within the system. ### **Path Forward and Strategic Resolution** The team concluded that a reactive approach is no longer viable and agreed on a multi-step, proactive strategy to gain control of the situation. - **Internal Alignment:** The immediate next step is an internal meeting to consolidate all existing information from the sales process and document a precise list of known requirements versus open questions. The goal is to avoid approaching the client with a "blank sheet of paper." - **Engage Client Leadership:** The team plans to schedule a meeting with the client's main point of contact, Drew, to realign on expectations, clarify the history of commitments, and identify the correct individuals on the client's side who can provide the missing field definitions and mapping. - **Reset Development Timeline:** A clear, new timeline for report development will be established only after the complete set of requirements is gathered and validated. The team emphasized that they cannot commit to deadlines while fundamental questions about the data remain unanswered. - **Process Improvement:** A broader concern was raised about the need for a better handoff process from sales to delivery to prevent similar situations in the future, where custom work is promised without a clear mechanism for requirement gathering post-contract signing.
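A minimal sketch of the missing "translation key" discussed above: every column in the client's example files must be pinned to a concrete UBM source before development can start. All entries here are hypothetical placeholders.

```python
# Each client column must resolve to a standard field, a custom attribute, or
# a GL allocation source; "unknown" entries are the open questions to bring to
# the client rather than a blank sheet of paper.
AP_TXT_MAPPING = {
    "Vendor Name":   {"source": "standard",         "field": "vendor.name"},
    "Vendor Number": {"source": "custom_attribute", "field": "vendor.attrs['client_vendor_id']"},
    "GL Account":    {"source": "gl_allocation",    "field": "allocation.gl_code"},
    "Late Charges":  {"source": "unknown",          "field": None},  # open question
}

open_questions = [col for col, m in AP_TXT_MAPPING.items() if m["source"] == "unknown"]
print("blocked on:", open_questions)
```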
Daily Priorities
## Summary ### Current Bill Processing Status and Data Volume The meeting opened with a discussion on the current state of bill processing, revealing a significant volume of active cases. After filtering out mock data, the team is working with a dataset of **2,657 bills** from both the "bill pay" and "AP" categories. A new "processing times details report" is now the primary tool for the team to work from, serving as a crucial stopgap measure to help focus efforts on legitimate, non-mock bills. ### Challenges with Mock Data and Account Freezes A major operational challenge identified is the prevalence of mock bills and the subsequent freezing of user accounts. When a mock bill is processed, it often freezes the associated account, requiring manual intervention to unfreeze it once the actual bill arrives. This has created a significant bottleneck. The team highlighted the need for a mass unfreeze utility or loader, as they currently lack the permissions to perform this action themselves and must rely on a specific individual (Ruben) to execute these changes. The situation is particularly acute with a specific new customer, Ascension, where all accounts were frozen pending balance reviews. ### Process Improvements and Automation Opportunities The conversation centered on identifying areas for process improvement and potential automation. A key request was for access to existing guides on how operators handle errors within the system (UDM). This would serve a dual purpose: helping to understand current procedures and identifying tasks that could be automated. The team is actively seeking ways to reduce noise in the system, such as by filtering bills by vendor to exclude certain cases and by having team members pre-identify accounts that are safe to unfreeze. ### Compliance and Audit Preparedness Significant concerns were raised regarding compliance, specifically related to an upcoming SOC audit. The team expressed apprehension about passing the next review cycle, citing a lack of evidence for some actions and historical instances of bills not being actioned for months. To address this moving forward, a new approval process has been established for certain fixes, requiring sign-off from designated team members. All related communication and tracking are now being centralized into a single channel ("UBM daily fixes") to ensure a proper audit trail and simplify future compliance reporting. ### Strategic Decisions on Bill Resolution A critical strategic discussion took place regarding the prioritization of bill payments versus perfect data integrity. Due to pressure to avoid service disconnections and get payments out the door, a decision was made to force some bills with minor data verification issues through the system. This approach acknowledges that it will likely create more work later, requiring extensive research and look-back to answer questions about potential double payments or other discrepancies. The team recognizes this trade-off, accepting that back-end reconciliation efforts will increase in the next 15-30 days in exchange for ensuring timely payments now.
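A minimal sketch of the requested mass-unfreeze utility, reflecting the pre-identified "safe" list and the new sign-off requirement; function and field names are assumptions.

```python
def mass_unfreeze(accounts, safe_to_unfreeze, approved_by):
    """Bulk-unfreeze accounts that the team has pre-identified as safe."""
    if not approved_by:
        raise PermissionError("SOC process: unfreeze batches need a sign-off")
    changed = []
    for acct in accounts:
        if acct["frozen"] and acct["id"] in safe_to_unfreeze:
            acct["frozen"] = False  # today this step goes through one person
            changed.append(acct["id"])
    return changed  # audit trail: logged alongside the approval in "UBM daily fixes"
```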
Daily Priorities
## Daily Charges and Bill Resolution The meeting focused on the daily monitoring and resolution of total charges, with an emphasis on ensuring that all bills are properly handled and updated. A shared sheet was referenced as the primary tool for tracking these charges, allowing team members to quickly access necessary information without repeated inquiries. However, it was noted that the daily resolution process for total charges may already be functional, though further verification is required. - **Review of total charges**: The current system involves checking a shared sheet daily to monitor charges, but there was uncertainty about whether the resolution process is fully operational, prompting a need for data validation. - **Bill count and historical handling**: There are 102 bills that need attention, and historically, initial phases were managed by specific individuals, but going forward, a more structured approval process is necessary to maintain compliance. ## Approval Process for Bill Edits To streamline bill edits and ensure compliance with SOC processes, a new approval workflow was established using a dedicated group in Microsoft Teams. This approach centralizes the approval process, requiring sign-off from key stakeholders before any changes are made to the bills. - **Centralized approval group**: A chat group involving Tim, Tara, and others will be used to approve daily changes made by the team, ensuring that all edits are reviewed and authorized to avoid future compliance issues. - **Handling bill downloads**: While files can be downloaded from Teams, it was acknowledged that this process can be cumbersome, so the group will serve as a hub for sharing and approving lists of bills that require fixes. ## Fixing Past Due Amounts Progress was reported on addressing past due amounts, with a recent list of 25 bills successfully fixed. The team emphasized the importance of continuing this effort to maintain financial accuracy and resolve errors promptly. - **Recent fixes and file sharing**: A file sent the previous day contained 25 bills that were corrected, highlighting the ongoing effort to tackle past due amounts and ensure data integrity. - **Interim logic clarity**: The logic for handling these fixes (prioritizing first, second, and then last steps) was confirmed to be clear, although it had not been implemented yet due to other priorities. ## Work Progress and Priorities The discussion highlighted current work priorities, including the focus on processing bills for immediate payment files and addressing any backlog. It was noted that while some tasks are clear, resource constraints have delayed progress on certain items. - **Payment file processing**: Attention was given to processing bills for Victor to include them in today's payment file, demonstrating a shift in focus to urgent financial tasks. - **Delayed interim logic implementation**: Although the interim logic for bill fixes is understood, it hasn't been acted upon yet, as efforts were redirected to more pressing issues like fixing specific bills overnight. ## Data Integrity and Verification Issues Concerns were raised about increasing numbers in data verification and integrity check statuses, indicating potential issues with bill data that need investigation. The team discussed methods for monitoring these metrics and identifying root causes.
- **Monitoring integrity checks**: It was explained that integrity check statuses can be filtered in the system to view live builds, but the rising numbers in data verification 1 and 2 suggest underlying problems that require further analysis. - **Plans for investigation**: The team plans to double-check the data and share updated files if numbers change, with a commitment to ongoing monitoring and collaboration to resolve these integrity issues efficiently.
Report
## Summary ### Power BI Access and Licensing Access to Power BI was successfully provisioned for key team members, marking a significant step forward in data reporting capabilities. The necessary licenses have been acquired and assigned, enabling the team to begin working with the platform. The focus now shifts to ensuring the correct administrative access levels are granted so that team members can not only view reports but also examine the underlying data sources and structures. This foundational work is critical for the team to become self-sufficient in creating and modifying reports. ### Auto-Archiving Feature Development Substantial progress was made on the auto-archiving functionality, a feature designed to improve data organization and system performance. The core user interface element for this feature has been completed, with the "show/hide" button being dynamically renamed to "archive" across the application, pulling its label directly from the Cosmos database. The next development phase involves creating and implementing a background job that will automatically archive records meeting specific criteria: a workflow status and a build status both marked as "complete" for a duration of 30 days. This duration is considered a starting point and may be adjusted to 14 days in the future based on system performance and user needs. ### Performance Optimization for Legacy System Queries A significant technical challenge was identified concerning the performance of data filters that query the legacy system. While most filters operate without issue, applying a filter for the "complete" status results in timeouts and bad request errors due to the immense volume of historical data in the legacy database. The current method requires fetching all records that match the filter from the legacy system before cross-referencing them with other systems, a process that is not feasible for large datasets. The team is actively exploring optimization strategies, including the possibility of changing how certain status values are sourced to reduce the dependency on the legacy system for these complex, cross-system queries. ### System Status Reporting Initiative A high-priority initiative is underway to replicate and understand the existing "System Status New" report. The goal is to empower the team to build and maintain such reports independently. To achieve this, team members are being equipped with the necessary tools and access, including the Power BI Report Builder, which allows for the download and inspection of report definition files. Understanding the underlying data is a key focus, with team members connecting to the legacy SQL database to examine the specific stored procedures and views that power the report. This deep dive is essential for replicating the report's functionality and ensuring the team has full ownership over its data reporting tools. ### Data Source Analysis and Access Management A major part of the meeting was dedicated to the practical steps of accessing and analyzing the data sources for reports. The team confirmed that the "System Status New" report is a paginated report that draws its data primarily from the legacy database via specific stored procedures. Gaining the correct administrative access to Power BI was a key hurdle that was overcome, allowing team members to download the report files and connect directly to the underlying SQL Server. 
This hands-on access is vital for the team to trace the origin of each data field and understand the complete data pipeline from the database to the final report output.
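A minimal sketch of the planned auto-archiving background job described above, assuming each record exposes its two statuses and a completion timestamp; names are assumptions, and the 30-day window is the stated starting point.

```python
from datetime import datetime, timedelta, timezone

ARCHIVE_AFTER = timedelta(days=30)  # starting point; may be tightened to 14 days

def find_archivable(records, now=None):
    """Records whose workflow and build statuses have been complete for 30 days."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if r["workflow_status"] == "complete"
        and r["build_status"] == "complete"
        and now - r["completed_at"] >= ARCHIVE_AFTER
    ]

def run_job(records):
    for r in find_archivable(records):
        r["archived"] = True  # surfaced in the UI via the renamed "archive" button
```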
Sprint Kickoff Updates
## Summary ### New Task Assignments Three new tickets have been created and assigned, with clear instructions and resources provided for their completion. - **Power BI Report Recreation**: The first and highest priority task involves recreating an existing Power BI report. Comprehensive details have been provided in Jira, supplemented by an overview video to assist with the task. To facilitate this work, direct contact with colleagues in Romania has been authorized for real-time assistance during their business day. - **Additional Assigned Tasks**: Two other tickets have also been assigned and are considered clear and ready for work to begin. These tasks are available in the "to do" column and are expected to provide sufficient workload. - **Telerik License Status**: A Telerik license is still being procured and is not yet available for use. This is noted as an upcoming resource, but its current unavailability is not expected to impede progress on the newly assigned work.
Power BI Report Review
## Summary ### Introduction to the Power BI Report The meeting centered on the analysis and understanding of a specific Power BI report named "system status dash new report," which is accessible through the data services portal. The primary objective is to gain a comprehensive understanding of this report's data structure and to identify areas for enhancement, particularly concerning the visibility of client information for certain bills. The report is a critical tool for monitoring system status and contains multiple data views that require detailed investigation. ### Understanding the Report's Data Structure: Bills vs. Meters A fundamental concept discussed was the distinction between "bills" and "meters" within the system's data model. A bill is a primary entity that can contain one or multiple meters, which act as the fundamental building blocks of a bill. For instance, a single bill might be composed of several individual meter entries. This relationship is crucial because the current "system status dash new report" presents data from two different perspectives: one view is based on meters, and another is based on bills. This results in a discrepancy in counts; for example, a sample view showed 20 bills but 92,104 meters, clearly illustrating that meters are a more granular and numerous subset of the bill data. ### Objectives for Report Analysis and Enhancement The analysis has several key goals. The immediate task is to thoroughly investigate the existing report to understand the meaning and purpose of each data column. Following this understanding, the next step is to recreate the current meter-based view from the perspective of bills, which will likely involve a significant reorganization of the data columns. Furthermore, there are plans to potentially expand upon these columns as their functions become clearer, with guidance provided throughout this process. ### Addressing the Issue of "Unknown" Bills A significant data visibility issue was identified concerning bills with unknown client information. Currently, there are bills in the system for which the client name is not known, and this critical information is not displayed anywhere in the existing report. This lack of visibility can cause bills to become stuck in various stages of the data workflow-such as in DSS (Data Staging Services) or the "audit data" stage-without being easily traceable because they lack an associated vendor or client name. A key initiative is to find a way to surface these "unknown" bills within the report to improve tracking and resolution. ### Clarifying the Data Workflow The discussion touched upon the ideal data workflow, although it was noted that the current sequence in the report is not entirely accurate or sequential. The intended flow begins with **data acquisition**, where data first enters the system. From there, it progresses to **DSS**, which is likely a data processing or staging layer. After DSS, data either moves to **account setup** or to **setup audit** if manual review is required. Finally, data that is cleared moves to a **ready to send** status. It is during the early stages of this workflow, such as in data acquisition or DSS, that vendor and client names should ideally be identified, but this is not always happening as intended. ### Access and Resources for the Task To carry out this analysis, access to specific tools and data sources is required. If there are any issues accessing the Power BI service itself, the appropriate contacts are Mercha and Andre on Slack. 
Mercha can assist with Power BI licensing access, while Andre can provide the necessary permissions to access the underlying data that feeds into the "system status dash new report." Ensuring this access is a prerequisite for beginning the investigative work.
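A minimal sketch of re-basing the meter-level view onto bills and surfacing "unknown" bills, using an invented roll-up of meter rows to bill rows; the field names are assumptions about the report's data model.

```python
from collections import defaultdict

# Meter rows are the granular records; many roll up to a single bill.
meters = [
    {"bill_id": "B1", "client": "acme", "stage": "ready_to_send"},
    {"bill_id": "B2", "client": None,   "stage": "dss"},  # stuck, no client name
    {"bill_id": "B2", "client": None,   "stage": "dss"},
]

bills = defaultdict(lambda: {"meter_count": 0, "client": None, "stage": None})
for m in meters:
    b = bills[m["bill_id"]]
    b["meter_count"] += 1
    b["client"] = b["client"] or m["client"]
    b["stage"] = m["stage"]

# The enhancement: make untraceable bills visible instead of silently stuck.
unknown = {bid: b for bid, b in bills.items() if b["client"] is None}
print(f"{len(bills)} bills from {len(meters)} meters; {len(unknown)} unknown")
```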
URA Additional Customer's Review
## Summary ### Business Growth and Sales Initiatives A significant focus is on accelerating revenue growth for the currently low-revenue company, with a major emphasis on sales activities. A recent large sales opportunity was pursued, which resulted in a promising long-term lead: a potential client expressed strong interest but is locked into an existing two-year partnership. They have invited further discussions and suggested that if certain additional capabilities were developed, they could start a partnership immediately. This underscores the critical importance of securing customer contracts, as all other efforts are deemed secondary until sales are finalized. ### Operational and Technical Challenges The discussion reveals several ongoing operational hurdles that are impacting productivity and project timelines. - **Internal Process Bottlenecks**: Significant time is being lost due to internal processes and bureaucratic hurdles, particularly when trying to start new tasks or procure resources. - **Power BI Licensing**: A specific, immediate challenge involved purchasing new Power BI licenses, which was complicated by an expired session and a request for a purchase order number that was not typically required. A workaround was successfully implemented to complete the purchase. - **Third-Party Vendor Management**: For additional licenses, coordination with an external vendor (Telerik) is required. It was confirmed that these licenses are managed separately from the main corporate IT environment and do not require additional internal approvals. ### Infrastructure and Security Migration A complex and delayed project to migrate the technology infrastructure to the main corporate environment ("Constellation") was discussed in detail. The goal is to establish a fully functional development environment where applications, databases, and queries can operate seamlessly. - **Current State and Delays**: The migration is far behind schedule, with an estimated completion now targeted for May. The process has been slowed by unforeseen technical complications, such as separate virtual networks and a multi-layered approval process for tickets. - **Security Concerns**: The current setup of the web application firewall (WAF) is largely ineffective; it merely proxies encrypted traffic without inspecting it, which negates its primary security purpose. This raises questions about the value of the current security implementation. - **Deployment Strategy**: The plan is to first get the basic infrastructure approved by security and other teams. Only after this foundation is stable will engineers be able to begin deploying and testing code in the new environment, which is a prerequisite for meaningful development progress. ### Audit and Compliance Demands The company is facing pressure from both internal and external audits, which is consuming considerable resources. - **SOC 1 Audit**: The company is undergoing a SOC 1 audit, where auditors are demanding 100% proof for randomly selected invoices, creating a significant administrative burden. - **SOC 2 Commitments**: There is external customer pressure to achieve SOC 2 compliance, a promise that was made approximately a year ago. However, given the current workload and delays in other projects, achieving SOC 2 before the onboarding of a major new client at the start of Q2 is considered highly unlikely.
### Project Onboarding and Resource Allocation The onboarding process for new clients is a major pain point, directly linked to resource constraints and strategic decisions made in the past. - **History of the Data Services Solution (DSS)**: The transition to an AI-powered solution for data processing was initiated after promising early tests in February. The original plan involved a slow, methodical testing phase with dedicated personnel. However, this plan was derailed when the dedicated tester was reassigned, and other teams were too busy to assist. This led to a decision to push the system to "production" without adequate testing, which has contributed to ongoing instability. - **Resource Scarcity**: Historically, requests for additional engineering resources to handle the onboarding workload were frequently denied. This forced a shift in development philosophy from building robust, long-term solutions to creating quick, short-term fixes just to stay afloat. The current situation involves "pulling" resources from other projects without formal assignment to manage the workload. - **Business Strategy Tension**: A fundamental conflict exists between the need to invest in scaling the business (a "growth investment" phase typical of VC-backed companies) and the current reality of "pinching pennies." This creates unrealistic expectations about the pace of development and the stability of the platform. ### Financial and Administrative Tasks A brief mention was made of ongoing financial administrative duties, specifically the management of expense reports through the SAP Concur system. It was confirmed that this task is being handled, though the specific individual responsible was not identified during the conversation.
UBM Errors Check-in
## Summary The meeting focused on resolving a critical data discrepancy in client billing records, specifically concerning the "prior bill past due amount" column that was causing system errors. A detailed, step-by-step analysis was conducted to identify the correct methodology for clearing these erroneous balances. ### Identifying the Core Data Discrepancy The primary objective was to pinpoint which data columns to compare in order to validate and subsequently clear out incorrect prior bill past due amounts. The initial approach of subtracting the "prior bill amount paid" from the "prior bill amount due" was found to be flawed, as it primarily resulted in zero values and did not address the actual problem causing the system errors. ### Clarifying the Correct Column Comparison A pivotal clarification was made regarding the data relationship: the correct comparison should be between the **"Prior Bill Amount Paid"** (what was actually paid on the last bill) and the **"Past Due Amount"** on the *current* bill. This is because the error stems from the system treating an already-paid amount from a previous bill as still being overdue on the current bill. - **Correcting the comparison logic:** The initial logic was comparing data from the *same* historical bill, which was not useful. The solution involves comparing data from two different sources: the payment record of the last bill against the outstanding balance field of the current bill. - **Defining the clearance action:** When these two values are equal, it indicates that the amount shown as "past due" on the current bill has, in fact, already been paid. Therefore, the "Past Due Amount" field (Column E in the spreadsheet) should be zeroed out. ### Data Validation and Refinement Process A rigorous process was undertaken to apply the new logic and filter the dataset for actionable items. The team moved from a theoretical discussion to practical implementation within the spreadsheet. - **Applying the new formula:** A new column was created to calculate the difference between "Prior Bill Amount Paid" (Column L) and "Past Due Amount" (Column E). - **Filtering for actionable items:** The dataset was then filtered to exclude non-bill-pay customers and to focus on records where the calculated difference was zero, confirming they were candidates for clearance. This refined the list from a seemingly small number of entries to a more substantial and accurate count of **145 records** requiring correction. ### Finalizing the Action Plan and Next Steps With the correct methodology established and validated, a clear path forward was defined. The refined and filtered list of 145 client records will be sent for processing to clear the identified prior bill past due amounts, thereby resolving the underlying system errors.
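A minimal sketch of the validated clearance logic: compare the prior bill's amount paid (column L) against the current bill's past due amount (column E) and zero out column E when they match. The row fields below stand in for the spreadsheet columns.

```python
def clear_past_due(rows):
    """Zero out column E where the prior bill's payment already covers it."""
    cleared = []
    for row in rows:
        if not row["bill_pay_customer"]:
            continue  # non-bill-pay customers were filtered out of the list
        # A zero difference means the "past due" amount was already paid.
        if row["prior_bill_amount_paid"] == row["past_due_amount"]:
            row["past_due_amount"] = 0.0  # column E
            cleared.append(row["client_id"])
    return cleared  # the refined list (145 records) sent for processing
```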
DSS Workflow Modularization
## Summary The meeting focused on strategic architectural planning for the Data Services System (DSS), primarily addressing the need to decouple client-specific logic to enable broader, multi-tenant usage across different applications. ### Core Problem: Coupled Logic and Scalability The central issue identified is that the current DSS contains a significant amount of logic that is specific to a single client, UBM. This design prevents the system from being easily opened up to other internal applications, such as Carbon Counting and Glimpse, or to external customers in the future. The goal is to separate this UBM-specific logic to create a more generic, scalable data extraction and processing service. ### Proposed Solution: An Interpreter Layer and Workflow Orchestration The proposed solution involves creating a flexible "interpreter" layer that can handle client-specific data massaging and translation. A key concept discussed was modeling this system using configurable "puzzle pieces" or building blocks that can be assembled into different workflows. - **Workflow-Based Architecture:** The idea is to move away from a single, fixed processing pipeline in DSS and instead support multiple, parallel workflows. Each workflow would be a unique sequence of processing steps tailored for a specific client or use case. - **Leveraging Existing DSS Foundation:** It was strongly suggested that this new capability should be built *within* the existing DSS framework rather than as a standalone service. This approach would leverage the existing asynchronous task processing, queuing systems, and infrastructure, making it faster to implement than creating an entirely new service from scratch. ### Technical Implementation and Scalability The discussion delved into how the proposed architecture would function and scale to meet future demands. - **Handling Multiple Inputs:** The system is envisioned to accept data from various sources, not just PDFs. It could also consume the structured JSON output from other processes, providing flexibility in how data is ingested. - **Scalability and Capacity Management:** A primary concern is scaling the AI processing capacity, currently a constraint due to OpenAI token limits. A proposed solution is to create separate workflows that utilize different AI models (e.g., GPT-3 vs. GPT-5), as each model has its own independent capacity pool, effectively multiplying available processing power. - **Priority and Queue Management:** The existing queuing system in DSS can be enhanced to manage workload priorities. The concept involves a "priority bucket" system that processes tasks based on predefined rules, ensuring high-priority work (like bill pay) is handled before lower-priority tasks. This would be a one-time setup per client or application. ### Onboarding and External Access The conversation covered the practicalities of allowing other applications and potential external users to consume the service. - **API and Documentation:** For internal Constellation Navigator applications, access would be provided through a curated API (e.g., a specific Swagger definition). It was noted that comprehensive onboarding documentation and a formalized API agreement, including error codes, need to be developed. - **Security and Multi-tenancy:** A significant point was that the current DSS UI lacks multi-tenancy, meaning all users can see all data. 
For true external access, a new, secure API gateway would be required to ensure data isolation, where external parties can only access and query their own data. ### Strategic Alignment and Next Steps The group aligned on the strategic direction and outlined preliminary next steps for the initiative. - **Path Forward:** The consensus was to proceed with enhancing DSS to support multiple, configurable workflows. This is seen as a more efficient path than building a separate service, despite the acknowledged benefits of isolation. - **Initial Actions:** The immediate next step involves high-level planning for the refactoring effort required to introduce the multi-workflow concept into DSS. This will be followed by an analysis to identify and separate UBM-specific logic from the core data extraction logic. - **Long-term Vision:** The ultimate goal is to transform DSS into a centralized, scalable platform for data extraction that can serve a wide array of internal and external customers, with client-specific needs handled through customizable workflow pipelines.
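A minimal sketch of the "puzzle piece" idea: named workflows assembled from reusable steps inside DSS, with a one-time priority assignment per client. Step, client, and priority names are illustrative assumptions.

```python
def extract(doc):        return {**doc, "extracted": True}  # shared AI extraction
def ubm_translate(doc):  return {**doc, "format": "ubm"}    # client-specific massaging
def emit_json(doc):      return {**doc, "output": "json"}

WORKFLOWS = {
    "ubm":             [extract, ubm_translate, emit_json],
    "carbon_counting": [extract, emit_json],  # no UBM interpreter step needed
}
PRIORITY = {"ubm": 0, "carbon_counting": 1}   # one-time setup per client

def process(client, doc):
    for step in WORKFLOWS[client]:
        doc = step(doc)
    return doc

queue = [{"client": "carbon_counting", "doc": {}}, {"client": "ubm", "doc": {}}]
queue.sort(key=lambda task: PRIORITY[task["client"]])  # bill pay work drains first
```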
DSS/UBM sync
## Summary The meeting focused on developing a cohesive API strategy across three core applications: **Utility Bill Management (UBM/DSS)**, **Carbon Accounting**, and **Glimpse**. The primary goal is to create a unified, navigator-wide approach to APIs that enables seamless data flow between applications, reduces redundant work, and creates a scalable foundation for both internal and external users. The discussion centered on establishing common standards, defining ownership, and planning a phased implementation. ### API Strategy and Framework The core objective is to build a standardized API framework that serves both internal application integration and external customer needs. This involves creating a unified approach to documentation, authentication, and data normalization to prevent each application team from reinventing the wheel. A significant part of the strategy is to designate the DSS (Data Supply Service) as the de facto **"Navigator Data Lake"** for standardized utility bill data, from which other applications like Carbon Accounting and Glimpse can draw. This centralizes the source of truth for usage information and ensures consistency across the platform. - **Unified Documentation and Standards:** All API documentation will be centralized on **ReadMe.com** to ensure consistency and accessibility. The focus is on creating clear, well-documented endpoints for each application, with hyperlinks to avoid content duplication. - **Navigator-Wide vs. Application-Specific Elements:** High-level components like authentication, Terms of Service, and SLAs will be standardized across Navigator. In contrast, individual application teams will own their specific endpoints, timelines, and the data schemas they expose. - **Phased Implementation and Prioritization:** The initiative will follow a "crawl, walk, run" approach. The initial phase involves getting basic documentation and internal APIs functional, with a longer-term goal of having a fully operational, authenticated, and externally-facing API ecosystem by Q4 of the next year. ### Key APIs and Their Roles Three primary APIs were discussed, each serving a distinct function within the Navigator ecosystem but requiring interoperability. The conversation highlighted the need for these APIs to work together to create a seamless customer journey from data ingestion to analysis and action. - **Glimpse API:** This API is actively on the roadmap, with a V2 version in development. Its primary function is to identify reduction opportunities for energy usage. It will rely on data from other services, particularly usage information from UBM/DSS. - **UDM/DSS API:** This API has a dual purpose. For external customers, it would allow for the extraction of bill and usage data in a standardized format (e.g., JSON). Internally, the DSS component is positioned to become the central data lake, processing PDF bills and outputting a normalized JSON that all other applications can consume. - **Carbon Accounting API:** This API is already in use and is considered a higher priority due to existing customer conversations. It calculates emissions based on utility data and shares common data fields (e.g., fuel type, amount, unit, zip code) with the other applications, making standardization crucial. ### Data Standardization and the "Navigator Data Lake" A major theme was the need to standardize data formats and establish a single source of truth for core information like client names, buildings, and usage data. 
This is essential for enabling applications to communicate effectively and for facilitating customer upselling between products. - **DSS as the Central Data Lake:** The JSON output from the DSS service, which processes utility bills, is proposed as the foundational data layer for all of Navigator. This would contain all parsed bill data, and applications would query or pull from this central source. - **Common Data Fields:** Key fields like `fuel type`, `amount`, `unit of measure`, `zip code`, and `date ranges` were identified as common across APIs. Standardizing these definitions is a prerequisite for smooth interoperability. - **Client and Asset Mapping:** A future-state "joint customer API" or a centralized CRM system (leveraging HubSpot) is envisioned to harmonize client entities and their associated assets (buildings, meters) across applications. This would allow, for example, Glimpse to automatically pull all known locations for a client that already exists in UBM. ### Authentication and Technical Infrastructure The technical implementation of a unified authentication system was identified as a critical, complex challenge that needs a cross-platform solution. The current disparity in cloud providers (AWS, Azure, GCP) across different applications adds a significant layer of complexity. - **Cross-Cloud Authentication Challenge:** A major hurdle is creating a single authentication gateway that works seamlessly across applications hosted on different cloud platforms (AWS, Azure, GCP). Research is needed to determine if an existing gateway can be extended or if a new, cloud-agnostic solution is required. - **Crawl, Walk, Run for Auth:** The initial "crawl" phase may involve simple API keys or tokens without complex user-level permissions. The long-term "run" vision includes sophisticated, federated access control that manages user permissions at both the entity (client) and application level. - **External vs. Internal Access:** A clear distinction will be made between internal-facing APIs (for inter-application communication) and external-facing ones (for customer use). Internal APIs may not even be published on ReadMe to avoid confusion. ### Customer and Internal Use Cases The drivers for developing these APIs are both internal efficiency and external customer demand. The APIs will empower customers to self-serve their data needs and will streamline internal processes for onboarding and data sharing between product teams. - **External Customer Demand:** Customers, especially large ones, are increasingly requesting API access to avoid custom reporting burdens. For instance, UBM customers want JSON exports for their own reporting, and brokers may want to feed data directly into Carbon Accounting. - **Internal Efficiency:** APIs will eliminate manual, script-based data transfers between teams (e.g., from UBM to Carbon Accounting). This will make processes more scalable, reliable, and maintainable. For example, Glimpse could automatically pull usage data for all of a client's buildings without manual intervention. - **Upsell Opportunities:** A unified data layer is the technical foundation for the business strategy of upselling. If a client is onboarded in one application (e.g., UBM), the data is immediately available to seamlessly onboard them into another (e.g., Glimpse or Carbon Accounting), significantly reducing friction and time-to-value. ### Next Steps and Immediate Actions The meeting concluded with a set of concrete, immediate actions to advance the initiative. 
These focus on sharing information, creating initial documentation, and further researching key technical challenges. - **Share DSS JSON Schema:** The UBM/DSS team will share an example of the full JSON output from the DSS service with the Glimpse and Carbon Accounting teams. This will allow them to understand the available data and begin mapping what they need. - **Grant ReadMe Access:** All application teams will be given access to each other's ReadMe workspaces to foster transparency and awareness of ongoing developments. - **Document Initial Endpoints:** Each team will begin documenting their initial API endpoints on ReadMe, focusing on the core services they plan to expose, even if the full implementation is later. - **Research Authentication:** The Carbon Accounting team will share their findings on cross-cloud authentication, and the UBM team will evaluate the effort required to implement a suggested solution, including the potential impact of a future migration to Azure.
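As a concrete reference point for the schema-sharing action item, the following is a minimal sketch of what the normalized DSS JSON could look like once mapped into typed structures. All field names here are illustrative assumptions, not the actual DSS schema; the authoritative example will come from the UBM/DSS team.

```python
from typing import List, TypedDict

class BillLineItem(TypedDict):
    """One parsed charge block from a utility bill (fields are assumed)."""
    fuel_type: str         # e.g. "electric", "natural_gas"
    amount: float          # consumption for the service period
    unit_of_measure: str   # e.g. "kWh", "therms"
    cost: float            # charges in USD

class NormalizedBill(TypedDict):
    """Shape of the JSON a DSS-style "data lake" could expose to
    Carbon Accounting and Glimpse."""
    client_id: str
    account_number: str
    zip_code: str
    service_start: str     # ISO 8601 date
    service_end: str
    line_items: List[BillLineItem]

example: NormalizedBill = {
    "client_id": "acme-001",
    "account_number": "123456789",
    "zip_code": "10001",
    "service_start": "2024-01-01",
    "service_end": "2024-01-31",
    "line_items": [
        {"fuel_type": "electric", "amount": 12450.0,
         "unit_of_measure": "kWh", "cost": 1860.75},
    ],
}
```

Agreeing on even this small a core (the common fields named in the meeting: fuel type, amount, unit, zip code, date ranges) would let each team extend the schema without breaking the others.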
Plan for Arcadia
## Summary The discussion centered on the current operational challenges and strategic direction for the product, particularly concerning the bill pay feature and data ingestion processes. A significant theme was the tension between addressing daily, urgent operational fires and making progress on long-term, strategic improvements. There is a notable sense of progress, exemplified by a consistently reduced queue in the Data Services (DSS) system, but this achievement feels overshadowed by the sheer volume of other pressing issues. The conversation heavily focused on the potential integration of a vendor, Arcadia, which is seen as a critical path to alleviating major bottlenecks. Arcadia is expected to manage credential handling and provide both PDFs and JSON data for bills, thereby abstracting away a significant portion of the current operational debt and freeing up internal resources. A major strategic debate involves the future of the bill pay feature itself. The current process is encumbered by complex logic and validation rules within the UBM system (e.g., virtual account mapping) that are not essential for the core function of paying a bill. This creates frustration for both the team and customers. The idea of creating a simplified, standalone bill pay flow was proposed. This new flow would bypass non-essential UBM analytics logic, using only the bare minimum data required to initiate a payment, thereby decoupling payment urgency from data reconciliation tasks. Looking ahead, there is a clear need to define a firm roadmap for the upcoming year. This involves making official decisions on strategic priorities, such as whether to fully commit to or de-emphasize bill pay, and establishing clear timelines and outcomes. The goal is to move beyond a reactive state of daily prioritization calls and towards a more predictable and strategic operational model. ## Wins - **Reduced DSS Queue:** For the first time in a considerable period, the queue in the Data Services (DSS) system has been consistently maintained at under 100 bills for almost two weeks, a significant improvement from previous levels that were in the thousands. - **Progress on Operational Issues:** Active and daily work is being done to address specific, recurring errors within the UBM system, such as unmapped virtual locations and past due amount discrepancies. - **Strategic Clarity:** The discussion successfully identified and articulated the core problem: the artificial blocking of bill payments by non-essential UBM analytics logic, paving the way for a potential solution. ## Issues - **Constant Firefighting:** The team is stuck in a reactive cycle, spending a disproportionate amount of time on daily operational issues and urgent customer requests, which hinders progress on strategic projects. - **Bill Pay Process Bottlenecks:** The current bill pay feature is hampered by dependencies on complex UBM logic (like virtual account mapping and validation errors such as "3018") that are unrelated to the actual act of paying a bill, causing delays and customer dissatisfaction. - **Data Ingestion Challenges:** The initial step of the pipeline, reliably ingesting bills from vendors, remains a failure point, with external teams not performing adequately. - **Strategic Indecision:** A lack of an official, communicated decision on the long-term future of the bill pay feature leads to conflicting priorities and inefficiency. 
- **Resource Bandwidth:** The relentless focus on operational issues consumes bandwidth that would otherwise be dedicated to new feature development, such as the UBM roadmap items. ## Commitments - **Arcadia Integration:** A commitment to pursue the integration of Arcadia as a vendor to manage credential ingestion and bill data acquisition, with the goal of offloading a significant portion of operational burdens. - **Simplified Bill Pay Flow:** A commitment to design and develop a streamlined, standalone bill pay feature that operates independently of non-essential UBM analytics logic, focusing solely on the data required to make a payment. - **Strategic Roadmap Definition:** A commitment to establish a clear roadmap and set of committed outcomes for the upcoming year, including making firm decisions on product direction and setting timelines to transition out of a reactive operational mode.
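To make the committed "simplified bill pay flow" tangible, here is a minimal sketch of a standalone payment payload that carries only what a payment needs, leaving virtual account mapping and analytics validation to run asynchronously. Field names are assumptions for illustration, not the actual UBM data model.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    """Bare-minimum fields needed to initiate a payment; everything else
    (virtual account mapping, analytics validation, error 3018 checks)
    would happen in the background, after the payment is queued."""
    vendor_name: str
    account_number: str
    amount_due: float
    due_date: str  # ISO 8601 date

def to_payment_request(bill: dict) -> PaymentRequest:
    # Deliberately ignores analytics-only fields (locations, usage,
    # validation state) so a data issue cannot block a payment.
    return PaymentRequest(
        vendor_name=bill["vendor_name"],
        account_number=bill["account_number"],
        amount_due=bill["amount_due"],
        due_date=bill["due_date"],
    )
```

The design point is the narrow interface: anything not needed to cut a payment stays out of the payment path, which is exactly the decoupling the meeting argued for.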
EDI Processing Plan
## Summary The meeting focused on developing a strategy for processing a large volume of utility accounts, specifically separating Electronic Data Interchange (EDI) accounts from non-EDI accounts and establishing a workflow for automated bill retrieval. ### Data Structure and Initial Review The discussion began with a review of the provided spreadsheet containing over 8,800 utility accounts. A method for identifying EDI accounts was confirmed: filtering the master list using specific account numbers provided in a separate "EDI Vendors" tab. This will allow for the clear segregation of accounts that can be processed automatically versus those that require manual intervention or a different approach. ### Organizing the Master Spreadsheet A significant portion of the conversation was dedicated to organizing the master data file to prevent errors and streamline work for different teams. The agreed-upon structure includes: - **A locked "Master" tab:** This will serve as the single source of truth, containing the original, unaltered data received from the client to ensure no accidental modifications occur. - **Dedicated filtered tabs:** Separate tabs will be created for "EDI Accounts," "Non-EDI Accounts," and "Accounts with No URL." This allows teams like Data Services to work directly from the "Non-EDI Accounts" list without risk of corrupting the master data. - **A pivot table for prioritization:** A pivot table will be added to quickly identify the top utility vendors by the number of accounts. Processing bills in batches based on the largest vendors will enable the team to demonstrate quick progress to the client. ### Handling Incomplete Data The team identified a subset of approximately 563 accounts where the bill URL was listed as "Not Available." A multi-step plan was formulated to address this data gap: - **Initial isolation:** These accounts will be moved to a separate "No URL" tab within the master spreadsheet. - **Parallel investigation:** While the script runs for accounts with valid URLs, a separate effort will engage the client to determine why these URLs are missing. Potential reasons discussed include paper-only billing, summary accounts, or issues with the data export from the client's system. - **Fallback categorization:** These accounts will ultimately be treated similarly to EDI accounts, requiring a manual or alternative processing method. ### EDI and Non-EDI Processing Methodology The core of the project involves using an automated script to scrape bill data from vendor websites for non-EDI accounts. The execution plan for this script was detailed: - **Phased execution by vendor:** The script will be run in batches, prioritizing the top 10 vendors identified by the pivot table to maximize efficiency and early visible progress. - **Scheduled runs and verification:** To avoid impacting daytime work, the script will run overnight. A critical step will be a "sanity check" after each run to verify the number of bills successfully retrieved matches the number of accounts processed, with plans to debug any discrepancies. - **Legacy system ingestion:** It was confirmed that historically, even scraped bill data has required manual ingestion into the Data Services system by another team, a process that will continue for this project. ### Execution Plan and Next Steps The meeting concluded with a clear outline of immediate actions and timelines to move the project forward. 
- **Spreadsheet updates:** The master spreadsheet will be updated to include the new EDI column and the segregated tabs for EDI, non-EDI, and "No URL" accounts. - **Script testing and run:** Following the receipt of the updated spreadsheet, the automated script will be tested on a small batch of accounts from a single vendor. A full run targeting thousands of non-EDI accounts is planned for the near future. - **Client communication:** The issue of the accounts with missing URLs will be raised with the client's team to develop a resolution plan in parallel with the primary scraping effort.
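A rough sketch of the agreed spreadsheet segregation, assuming a pandas workflow; the tab and column names ("Master", "EDI Vendors", "Account Number", "Bill URL", "Vendor") are hypothetical stand-ins for whatever the client's export actually uses.

```python
import pandas as pd

# Locked source of truth plus the EDI account list from the separate tab.
master = pd.read_excel("accounts.xlsx", sheet_name="Master")
edi_ids = pd.read_excel("accounts.xlsx", sheet_name="EDI Vendors")["Account Number"]

is_edi = master["Account Number"].isin(edi_ids)
no_url = master["Bill URL"].fillna("Not Available").eq("Not Available")

edi_accounts = master[is_edi]
non_edi_accounts = master[~is_edi & ~no_url]   # eligible for the scraping script
no_url_accounts = master[~is_edi & no_url]     # ~563 accounts to raise with the client

# Pivot-style view: top 10 vendors by account count, to batch the script runs.
print(non_edi_accounts["Vendor"].value_counts().head(10))

# Write the working tabs alongside the untouched master copy.
with pd.ExcelWriter("accounts_organized.xlsx") as writer:
    master.to_excel(writer, sheet_name="Master", index=False)
    edi_accounts.to_excel(writer, sheet_name="EDI Accounts", index=False)
    non_edi_accounts.to_excel(writer, sheet_name="Non-EDI Accounts", index=False)
    no_url_accounts.to_excel(writer, sheet_name="No URL", index=False)
```

The `value_counts()` output stands in for the pivot table used to pick the top-ten vendor batches and demonstrate early progress.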
DSS/UBM Errors
## Customer The customer is actively engaged in managing and optimizing utility billing operations, with a particular focus on data analysis and process automation. Their work involves deep analysis of billing data, Power BI reports, and developing automated logic for mapping utility bills to specific service locations. They are technically proficient, working directly with database queries and system logic to solve complex operational challenges related to utility bill processing and payment prioritization. ## Success The most significant achievement discussed is the development and refinement of an automated logic for mapping incoming utility bills to the correct service locations. This system is designed to intelligently parse service addresses by comparing key components like the first string, second string, and last string of an address against existing location records. This automated approach is crucial for streamlining operations, as it aims to correctly route a substantial portion of bills, potentially 30% to 50%, without manual intervention, thereby increasing efficiency and reducing the operational workload. ## Challenge The primary challenge centers on ensuring the accuracy of the automated mapping process to prevent incorrect assignments, which would severely impact downstream location analytics and financial reporting. A specific and critical issue identified is the occurrence of incorrect billing IDs associated with service addresses. In one case, a bill was processed with a completely different account number than what was on the physical bill, a discrepancy that could lead to payments being sent to the wrong utility account. This highlights a vulnerability where the system might auto-map a bill based on a correct service address but an incorrect billing ID, creating significant financial and data integrity risks. The root cause of these billing ID errors is suspected to lie in upstream data ingestion processes. ## Goals - **Enhance Automated Mapping Accuracy:** The overarching goal is to implement additional validation conditions within the automated mapping logic. This is to guard against incorrect mappings that would corrupt location-based analytics and require difficult remediation after a bill is processed. - **Improve Operational Prioritization:** A key objective is to empower the operations team to better prioritize bills that require actual payment. This involves enhancing Power BI reports with specific data columns, such as prior bill source, amount, and due date, to help filter out mock bills and overlapping bill errors that do not need payment, thus optimizing team capacity. - **Resolve Data Integrity Issues at the Source:** There is a critical need to investigate and rectify the root causes of incorrect billing IDs being associated with service addresses in the system, as this poses a direct threat to payment accuracy. - **Refine the Mapping Logic:** The goal is to finalize and implement a robust, multi-layered address matching logic that systematically checks the first, second, and last strings of a service address to minimize false positives and ensure only correct matches are automated, as sketched below.
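A minimal sketch of that multi-layered matching, assuming simple dict-shaped location records: narrow the candidates by the first, then second, then last token of the service address, and auto-map only when exactly one candidate survives.

```python
def tokens(address: str) -> list[str]:
    return address.upper().split()

def match_location(service_address: str, locations: list[dict]) -> dict | None:
    """Progressively narrow candidates by the first, second, and last
    tokens of the service address; auto-map only a unique survivor,
    otherwise return None to flag the bill for manual review."""
    bill_toks = tokens(service_address)
    if not bill_toks:
        return None
    picks = [lambda t: t[0],
             lambda t: t[1] if len(t) > 1 else None,
             lambda t: t[-1]]
    candidates = locations
    for pick in picks:
        want = pick(bill_toks)
        if want is None:
            break
        candidates = [loc for loc in candidates
                      if tokens(loc["address"])
                      and pick(tokens(loc["address"])) == want]
        if len(candidates) == 1:
            return candidates[0]   # unique match: safe to auto-map
        if not candidates:
            return None            # nothing matches: manual review
    return None                    # still ambiguous: manual review
```

The additional validation conditions discussed (for example, cross-checking the billing ID against the account on the physical bill) would slot in as a final gate before the unique-match return.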
Bill Template Cleanup
## Summary The meeting focused on addressing a backlog of **1,567 past-due bills** with due dates of November 16th and prior. A primary challenge identified was distinguishing between bills that have already been satisfied by "mock" payments and those that require actual payment, a process complicated by overlapping service dates and data integrity checks. The discussion centered on developing a systematic approach to clear this backlog, prioritizing key customers, and implementing both immediate and long-term solutions to prevent future occurrences. ### Backlog Analysis and Initial Priorities The core issue is a significant number of bills stuck in the system, requiring a triage approach to determine which need payment and which are duplicates of already-paid mock bills. - **Backlog Composition:** The 1,567 bills are a mix of legitimate unpaid bills and bills that are essentially duplicates of mock bills that have already been paid. The operational need is to separate these two groups efficiently. - **Customer Prioritization:** Initial priorities were set to focus on non-Banyan Bill Pay customers, with specific mentions of **Victra, Park National Bank (PNB), Medline, Ascension, and Sheets**. A particular urgency was expressed for customers where the company is liable for late fees. - **Data Verification Hurdle:** A substantial portion of the backlog (810 bills) is held up in "data verification," while others are failing an "integrity check," indicating potential data quality issues that must be resolved before processing. ### Operational Strategy for Bill Resolution The team devised a multi-pronged strategy to tackle the backlog, involving both automated scripts and manual review by the operations team. - **Filtering Mock Bills:** A critical operational need is the ability to filter out bills that already have a corresponding mock bill. This would allow the team to focus efforts exclusively on bills that genuinely require payment, significantly reducing the workload. - **Script-Based Solutions:** The use of automated scripts was discussed as a key method for bulk resolution. For instance, a script was run for Ascension, which appeared to have cleared a significant number of its bills, demonstrating the effectiveness of this approach. - **Manual Cleanup for Complex Cases:** Bills that fail integrity checks or have complex overlapping service date errors require manual intervention by the operations team to investigate and resolve the underlying data issues. ### Technical Solutions and System Enhancements To improve the efficiency of backlog management long-term, the discussion highlighted the need for enhanced reporting and data analysis capabilities within their system. - **Enhanced Power BI Reporting:** A plan was formulated to augment the existing Power BI report with new fields, specifically **"last bill source"** and **"last bill pay amount."** This enhancement would automatically identify bills that are follow-ups to mock payments, eliminating the need for manual lookups and saving considerable time. - **Immediate Workaround with VLOOKUP:** While waiting for the permanent Power BI solution, a temporary fix using spreadsheet functions like VLOOKUP was proposed to manually identify potential mock bill follow-ups, though this was acknowledged to be time-consuming and less accurate. - **Addressing Overlapping Service Dates:** A separate but related technical issue involves "overlapping service dates" which is preventing some bills from processing. 
A mapping exercise is underway to resolve this, which will systematically clear another category of errors. ### Customer-Specific Action Plans The conversation delved into specific action plans for high-priority customers, illustrating the tailored approach required for each. - **Victra Case Study:** For Victra, a detailed plan was created: approximately 194 bills in data verification would be auto-resolved and sent for payment, while the remaining 60+ bills stuck in integrity checks would be manually cleaned up by the operations team. An additional ~40 bills identified in a separate "lab" report were also being handled. - **Park National Bank (PNB) and Medline:** For these customers, the strategy differs because extensive mock bills have already been submitted. The focus here is on **canceling charges or zeroing out the current bills** since payment was already collected via the mock bills, preventing duplicate payments. - **High-Visibility Customers:** Other customers like PPG were noted as high-visibility, with urgent needs to avoid service disruption, emphasizing that the backlog has direct consequences on client relationships. ### Long-Term Process and Strategic Discussions The meeting concluded with forward-looking ideas on how to fundamentally improve the bill payment process to avoid such backlogs in the future. - **Streamlining Payment File Generation:** A strategic idea was proposed to decouple the payment process from the data cleanup process. The concept involves **generating a simplified payment file as soon as a bill hits the system**, ensuring timely payment while the bill continues through the necessary verification and analytics cleanup in the background. - **Focus on Preventing Duplicate Payments:** The long-term solution must ensure that the primary guardrail for the payment file is the prevention of duplicate payments, which is the main financial risk identified with the current mock-and-actual bill process. - **Resource Allocation for Manual Tasks:** The discussion acknowledged the need for continued manual effort on specific tasks like the "bill block" process, with a plan to allocate appropriate resources to keep this work on track.
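Pending the permanent Power BI enhancement, the interim VLOOKUP-style triage described earlier could look like the following pandas sketch. The file and column names are hypothetical stand-ins for the planned "last bill source" and "last bill pay amount" fields.

```python
import pandas as pd

backlog = pd.read_csv("past_due_backlog.csv")    # the 1,567 stuck bills
history = pd.read_csv("paid_bill_history.csv")   # includes paid mock bills

# Most recent payment per account, renamed to mirror the planned report fields.
last_paid = (history.sort_values("paid_date")
                    .groupby("account_number")
                    .tail(1)[["account_number", "bill_source", "paid_amount"]]
                    .rename(columns={"bill_source": "last_bill_source",
                                     "paid_amount": "last_bill_pay_amount"}))

triaged = backlog.merge(last_paid, on="account_number", how="left")

# Bills whose most recent payment came from a mock bill are candidates to
# cancel or zero out; everything else still needs an actual payment.
mock_followups = triaged[triaged["last_bill_source"] == "mock"]
needs_payment = triaged[triaged["last_bill_source"] != "mock"]
```

This is the same lookup the Power BI fields would automate; the spreadsheet version is simply slower and more error-prone, as acknowledged in the meeting.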
DSS Daily Status
## Summary ### DSS Product Performance and IPS Filter A critical issue was identified where recent changes to the IPS filter caused significant performance degradation in the production environment, leading to timeouts and a persistent loading screen. While the changes functioned correctly in the development and staging environments, they were rolled back in production to restore system stability and prevent user impact. Despite the interface issues, background data processing jobs continued to operate, with data creation processes confirming that only a minimal number of records were affected during the outage. ### Data Processing for EDI and Non-EDI Vendors A major initiative involves processing a large dataset of vendors, requiring a clear separation between EDI (Electronic Data Interchange) and non-EDI accounts. The plan is to handle these as separate batches, as EDI accounts present a more complex setup and are considered a lower priority "nice-to-have" compared to the essential non-EDI accounts. The primary method for differentiation will be filtering based on the "vendor" column in the provided data files. A key next step is to confirm the exact filtering criteria with the relevant team before proceeding with the data ingestion process, which is expected to be a resource-intensive task. ### PDF Generation Script Execution A separate task involves running a script to generate approximately 6000 PDFs. The execution of this script is considered a low-priority, background task due to its potential to consume significant CPU resources and time. The team decided it should be run after hours to avoid impacting daytime development work and other critical activities. ### Virtual Account Implementation and Data Guardrails Significant progress was reported on the virtual account feature, which is a high-priority item for resolving mapping location issues in UBM. Key developments include: - **Field Validation:** Ensuring that all six required data fields are being sent correctly for virtual accounts. - **Error Handling:** A strategy was defined for records with missing virtual account details, where they will be flagged as failures in the pre-audit stage. This will prompt operators to address the issues directly within the DSS system. - **User Interface Enhancements:** Pagination and a display status filter have been added to the Virtual Accounts table in the application's details page, improving usability. ### Team Updates and Miscellaneous Tasks The team provided updates on various other ongoing tasks: - **Query Optimization:** A query related to the IPS filter was successfully optimized and the changes have been deployed to the test environment. - **Data Display Issue:** An investigation is ongoing into a problem where IPS DSS data is not appearing in a system table, despite all underlying processes seeming to function correctly. - **Software Licensing:** There is a blocker in acquiring a necessary software license (Telerik) due to internal procurement processes, specifically the need for an official purchase order. A trial version is available but may not be compatible with the current software version in use.
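As a reference for the virtual account guardrails described above, a minimal sketch of the pre-audit field check. The meeting specifies only that six fields are required; the names below are assumptions for illustration.

```python
REQUIRED_VA_FIELDS = [  # illustrative names; the source says only "six required fields"
    "client_id", "vendor_name", "service_account_id",
    "service_address", "commodity", "meter_number",
]

def preaudit_virtual_account(record: dict) -> list[str]:
    """Return the list of missing or blank fields; a non-empty result
    flags the record as a pre-audit failure so an operator can fix it
    directly within DSS before it flows on to UBM."""
    return [f for f in REQUIRED_VA_FIELDS
            if not str(record.get(f, "")).strip()]
```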
Billing Error Review
## Summary The meeting focused on reviewing progress and addressing critical challenges in automated bill processing, with particular emphasis on error resolution strategies and fundamental concerns about location mapping logic. ### Bill Error Resolution Progress Significant progress has been made in resolving billing errors through automated adjustments. A total of 693 bills were initially identified with errors, and after implementing adjustments, 649 bills were successfully updated. However, 44 bills remain unresolved due to complex issues involving multiple locations or unmet conditions. The remaining errors represent cases where bills either have multiple location associations or other underlying problems that prevent automated resolution. The terminology for describing these fixes was clarified to use "resolved" rather than "fixed" to better reflect the nature of the corrections. ### Automation Strategy for Past Due Amounts A detailed discussion occurred regarding the automation of past due amount processing, comparing two potential approaches. The current manual process involves Terra exporting data and comparing columns to identify matching amounts between "prior bill amount due" and "prior bill amount paid." Two automation solutions were evaluated: - **Automated script implementation**: This would automatically compare the two columns and remove prior balances when amounts match, following the same logic currently performed manually - **Bulk upload functionality**: This would allow operators to upload spreadsheets with pre-processed data The analysis revealed that both solutions would require similar implementation effort, with neither offering significant time advantages over the other. The locally existing script was identified as the most immediately viable solution, though it would require adaptation for production use. ### Location Mapping Logic Concerns Serious concerns were raised about the fundamental location mapping logic after testing revealed multiple problematic scenarios. The current approach uses space-demarcated string matching between service addresses on bills and location records, which proved inadequate in several critical cases: - **Address formatting inconsistencies**: The logic failed with addresses like "901902" where no space separation exists, preventing proper matching - **Fractional address representation**: Cases where "1/2" on bills corresponds to "0.5" in location records create matching failures due to format discrepancies - **Multiple matching candidates**: The algorithm often identifies multiple potential location matches without sufficient criteria to determine the correct association - **Account number discrepancies**: Instances were found where billing IDs from PDF extraction don't match actual account numbers, preventing duplicate bill detection These issues suggest that the current matching approach may produce incorrect associations in a majority of cases rather than just edge scenarios, potentially compromising data integrity across the system. ### Error Prioritization and Resource Allocation The team established clear priorities for addressing the various error types, focusing resources on the most critical issues first. The current prioritization places high importance on resolving specific error codes (3018 and related issues) before addressing other identified problems. This structured approach ensures that development efforts concentrate on the errors with the greatest impact on system functionality and data accuracy. 
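Returning to the past due automation above, the column comparison the script would replicate is small; a sketch assuming per-bill fields named after the exported columns, with a cent-level tolerance standing in for "matching amounts".

```python
from math import isclose

def clear_matched_prior_balance(bill: dict) -> dict:
    """Mirror of the manual export-and-compare step: when the prior bill's
    amount due equals what was already paid, the carried-over prior balance
    is redundant and can be removed. Field names are illustrative."""
    due = bill.get("prior_bill_amount_due")
    paid = bill.get("prior_bill_amount_paid")
    if due is not None and paid is not None and isclose(due, paid, abs_tol=0.005):
        bill["prior_balance"] = 0.0
    return bill
```

Whether this runs as a standalone script or behind a bulk-upload screen, the predicate is identical, which is consistent with the finding that neither option saves meaningful effort over the other.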
### Handover and Operational Transition Preparations are underway to transition bill resolution responsibilities to Veronica, though the timeline remains flexible. The handover process involves more than simple task transfer, requiring updates to scripts and access permissions to ensure operational continuity. This transition represents a strategic move to distribute workload and build operational redundancy within the team. ### Data Integrity and Monitoring Challenges The discussion highlighted systemic concerns about data quality monitoring and issue detection. Several problematic bills had gone undetected through current monitoring processes, raising questions about the effectiveness of existing quality controls. The team identified the need for more robust validation mechanisms to catch mapping errors and data inconsistencies before they affect downstream processes and reporting accuracy.
Lawyer Meeting
## Summary ### Current Situation and Key Challenge The primary discussion revolves around a spouse who entered the U.S. on a B-2 visitor visa in late July, with her authorized stay set to expire at the end of January. The central objective is to secure a path to a green card for her, based on her marriage to a U.S. permanent resident. The core challenge is timing the necessary immigration filings to ensure she can maintain lawful status in the U.S. while her application is processed, avoiding any period of unlawful presence. ### Adjustment of Status Pathway Explained The recommended strategy involves a two-step process to adjust the spouse's status from a visitor to a permanent resident. This pathway is contingent on the monthly Visa Bulletin published by the U.S. Department of State. - **I-130 Petition (Petition for Alien Relative):** This is the first form that must be filed by the U.S. permanent resident spouse. It establishes the qualifying relationship and creates a "priority date." This petition can be filed at any time, but its filing date is crucial for the next step. - **I-485 Application (Application to Register Permanent Residence or Adjust Status):** This is the actual green card application. It can only be filed when the spouse's "priority date" becomes current according to the Visa Bulletin's "Dates for Filing" chart. Filing the I-485 is the critical action that allows an applicant to remain in the U.S. legally, even if their original visa expires, and to apply for work and travel permits. ### Critical Timing and the Visa Bulletin The entire strategy hinges on the dates published in the monthly Visa Bulletin, specifically for the F2A category (spouses of permanent residents). - **Immediate Opportunity:** The December Visa Bulletin was highly anticipated, as it was expected to show an advancement in the "filing date." If the new date is on or after the day the I-130 is filed, then the I-485 could be submitted immediately in December. - **The January Deadline:** The spouse's B-2 status expires at the end of January. To maintain lawful status, the I-485 *must* be filed before this expiration. If it is filed while she is still in status, she enters a period of authorized stay pending adjudication. If not, she would begin accruing unlawful presence. ### Weighing the Options and Associated Risks The conversation carefully analyzed different scenarios and their potential outcomes, each carrying distinct risks. - **Filing the I-130 Immediately:** This secures the earliest possible priority date and positions the couple to file the I-485 as soon as the dates allow in December. However, filing an immigrant petition (I-130) demonstrates immigrant intent, which could jeopardize any future applications for visitor visas or extensions and could lead to issues if she attempts to re-enter the U.S. after international travel. - **Filing for a B-2 Extension:** As a potential stopgap, the spouse could apply to extend her visitor status before it expires. This would keep her in a period of lawful presence while the extension is pending. The significant risk is that the extension could be denied, especially with a pending I-130, and this path does not itself lead to a green card. - **The Overstay Scenario:** It was noted that if the spouse were to overstay her visa and the primary sponsor later becomes a U.S. citizen, the overstay would be forgiven in the green card process. 
However, this course of action was explicitly not recommended due to the legal complications and extended period of unlawful presence it would entail. ### Strategic Recommendation and Next Steps The consensus was to proceed with filing the I-130 petition as soon as possible. This strategy is seen as having a "decent chance" of success, as it aims to get the I-485 filed before the end-of-January deadline. The expectation is that the December Visa Bulletin will be favorable. While not a guarantee, this is considered the most proactive and reasonable course of action given the circumstances. The associated legal fees were estimated at approximately $5,000, plus government filing fees. ### Ancillary Considerations Several secondary but important topics were clarified during the discussion. - **Work and Travel Permits:** Upon filing the I-485, the spouse becomes eligible to apply for an Employment Authorization Document (EAD) and Advance Parole (travel permit). While these can sometimes be issued within a few months, there is no guarantee, and sometimes the green card itself is adjudicated first. - **Impact of Childbirth:** If the spouse were to become pregnant during the process, the child would be a U.S. citizen by birth. This would serve as strong evidence of a bona fide marriage but would not directly expedite the spouse's own green card application until the child turns 21. - **Health Insurance and Marriage Certificate:** Adding the spouse to the sponsor's health insurance is not only permissible but can be viewed favorably as it demonstrates the sponsor's ability to provide financial support. A Saudi marriage certificate is fully recognized for U.S. immigration purposes.
Simon - review timelines and action items (internal)
## Customer The customer is a large-scale utility or energy management company responsible for managing thousands of metered accounts. Their role involves comprehensive data management and bill processing for a substantial portfolio, indicating a sophisticated operational background requiring robust and scalable solutions. ## Success The most significant achievement in the onboarding process has been the establishment of a **complete and confirmed data foundation**. The customer successfully provided a full data export, which includes detailed information for over 8,800 metered accounts. This dataset is richer than typically requested, containing not just account numbers but also crucial location data necessary for system mapping. This thorough data submission has effectively completed the "confirmed onboarding template" phase, providing a solid and high-quality starting point for the technical integration work. ## Challenge The single biggest challenge is the **complexity and dependency surrounding EDI (Electronic Data Interchange) accounts**. A significant portion of the account portfolio, approximately 2,500 accounts, is EDI-based, which introduces a major bottleneck. Accessing the setup bills for these accounts is contingent upon receiving a signed Letter of Authorization (LOA), a process that is currently outstanding. This dependency creates a critical path delay, as the 60-day timeline for portal setup and bill pulling for these EDI accounts cannot commence without the LOA. Furthermore, differentiating between EDI and non-EDI accounts within the master list is an immediate technical hurdle that needs to be resolved to enable parallel processing. ## Goals The customer's primary objectives for the product implementation are clear and multi-faceted: - **Achieve Full System Integration:** The overarching goal is to successfully onboard the entire account portfolio into the system. This includes the technical processes of data extraction, location setup, account mapping, and attribute loading within the target platform (UBM). - **Establish Automated Bill Pay and AP Integration:** A key business goal is to integrate the accounts with the bill pay and accounts payable (AP) systems. While this is a crucial deliverable, progress is currently on hold pending internal resourcing from the customer's side. - **Develop Customized Reporting:** The customer has expressed a need for specific, customized reports. Building these reports is considered a major project deliverable, though its exact timing relative to the go-live date is still to be finalized. - **Clarify and Manage Timelines:** There is a strong goal to establish realistic, clear, and mutually understood timelines for each stage of the onboarding process. This involves firming up dates and clearly communicating dependencies to manage expectations, especially concerning the impact of upcoming holidays and the EDI LOA process.
Ad-hoc meeting
[EXTERNAL]Story Walkthrough
## Summary The meeting focused on several high-priority technical updates and feature implementations, primarily concerning system attributes, report configurations, and payment file processing. ### Expanding Custom Attributes The initial step involves adding new account attributes and enabling their update, including via bulk operations. A significant development is that the underlying report configuration may not require direct modification. The plan is to first add the attributes and then create a separate story for the primary key (PK) integration and subsequent report testing. - **Dynamic Attribute Handling**: The system's cube configuration is designed to dynamically support more than the current 10 custom attributes. This suggests that adding new attributes should automatically be reflected in the reports without manual code changes, though this functionality requires verification through testing. - **Phased Implementation**: The work will be split into multiple development stories for better management. The first story will cover the addition of attributes and the bulk update function, while a subsequent story will handle the PK integration and the associated testing for both the main reports and a separate Power BI report. ### MED XL Tableau Report Filter Update A high-priority item was discussed concerning the MED XL Tableau report, which currently pulls all types of bills. The requirement is to modify its data source query to exclusively load "live" bills. - **Filtering at the Source**: Instead of applying a filter within the report, the change must be made to the underlying query itself to improve efficiency and accuracy. This will ensure the report ignores other bill types such as setup bills, project bills, and historical bills. - **Sprint Prioritization**: While there is some flexibility, this task is considered a high priority. The goal is to complete development within the current or the following sprint, with an expectation to finalize it by the next week at the latest. ### Emergency Payment File Synchronization The discussion centered on aligning the PC (payment confirmation) and AP (accounts payable) files for emergency payments, starting with the customer Victra. The objective is to generate a one-to-one correspondence between these files for each payment. - **Resolution of a Previous Issue**: It was clarified that a recent story, already tested and scheduled for production release, has addressed this exact requirement. The solution now generates a separate AP file for each emergency payment file, moving away from the previous cumulative file approach. - **Expansion to All Customers**: The successful logic implemented for Victra is now being extended to all other customers. While a priority order was suggested (PNB and Ascension mentioned as next in line), the development is proceeding to apply the change across the entire customer base simultaneously. ### Addition of a New Commodity and Unit of Measure A new requirement involves adding "Biomass" as a new commodity to the system, along with its default unit of measure. - **Data Configuration**: This requires updates to two specific system tables: one for the new commodity and another for its associated unit of measure. Notably, no unit conversion rules are needed for this addition. - **Dependency on Sample Data**: A critical prerequisite for implementation is receiving a sample bill from the requester. 
This sample is necessary to determine the exact code and value for the commodity, as the system uses these details for accurate matching and processing.
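For the MED XL change, the key point is that the filter belongs in the data source query rather than in the report itself. A sketch of the shape of that change, with the table and column names assumed purely for illustration:

```python
# Source-level filter for the MED XL report: only "live" bills reach Tableau.
# Table and column names below are assumptions, not the actual schema.
MED_XL_SOURCE_QUERY = """
    SELECT b.*
    FROM bills AS b
    WHERE b.bill_type = 'live'  -- excludes setup, project, and historical bills
"""
```

Filtering at the source keeps the excluded bill types out of the extract entirely, which is where the efficiency and accuracy gains come from.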
Status and Next Steps
## Summary The meeting primarily focused on development progress updates across multiple projects, technical challenges encountered by the team, and the planning of new features and process improvements. ### Development Progress Updates The team provided updates on their recent work across several key areas, highlighting both completed tasks and ongoing development efforts. - **Apigee and Appsmith Development:** Progress was made on setting up the Apigee repository locally, with the Web API running successfully. However, a build error was encountered for the web app UI related to Telerik components. Concurrently, work continued on Appsmith to build a model for viewing virtual account details on the location details page. - **DSS Client-Specific Features:** Development was completed for adding client-specific observation types for Snipes and Ursula, pending deployment. An additional task was identified to remove specific units of measurement from the system based on a request documented in the DSS meeting chat. - **IPS Filter Ticket:** The IPS filter ticket was completed, including backend API changes and testing. A frontend issue was noted where the UI was not correctly rendering the dynamically fetched values, despite the API returning the correct data. ### Technical Challenges and Blockers Several technical impediments were discussed that are currently affecting development velocity and deployment processes. - **Telerik Component Version Conflict:** A significant blocker was identified where the project uses an older version of the Telerik UI components. The free trial account does not permit the download of these older versions, preventing the successful build of the web app. - **Azure Frontend Caching Issues:** A recurring issue with Azure caching frontend UI components was confirmed. This causes delays in seeing updated changes in lower environments and sometimes in production, requiring manual purging of the cache to reflect updates immediately. ### Planning and Scoping New Features Discussions were initiated to plan for significant architectural changes and to scope out a new automation script, setting the stage for future work. - **Interpreter Layer Architecture:** A key strategic initiative was introduced to refactor the DSS application. The plan is to move UBM-specific business logic out of DSS and into a new "interpreter layer." This would be complemented by developing additional integrations for data sources that feed into DSS, aiming for a more modular and scalable architecture. - **NG Script Automation:** A new requirement was presented for an "NG script" that automates the downloading and processing of account-related documents. The initial analysis suggests the script needs to handle a list of EDI accounts, download files (which may be JPEGs or PDFs), and potentially convert or process them, though the exact functionality and previous ownership of this process remain unclear. ### Process and Environment Discussions The conversation also covered operational aspects related to task management and environment stability. - **Task Prioritization and Jira:** A follow-up was requested on a task related to "efficiency" that is listed as a priority in Jira, indicating a need for clarity on its requirements or status. - **Production Deployment Reliability:** Concerns were raised about the reliability of seeing changes in production due to the Azure caching behavior, linking it to a past incident involving Azure downtime that affected the visibility of a deployed change. 
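Since the NG script's exact behavior is still being scoped, the following is only a speculative outline of its core loop under the stated requirements: walk a list of EDI accounts, download each document (which may be a JPEG or a PDF), and name the file by account and detected type. Every name here is an assumption.

```python
import mimetypes
from pathlib import Path

import requests  # assumption: documents are fetched over HTTP per-account URLs

def download_account_docs(accounts: list[dict], out_dir: str = "docs") -> None:
    """Speculative skeleton of the NG script: fetch one document per EDI
    account and save it with an extension inferred from the content type."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for acct in accounts:
        resp = requests.get(acct["document_url"], timeout=60)
        resp.raise_for_status()
        content_type = resp.headers.get("Content-Type", "application/pdf")
        ext = mimetypes.guess_extension(content_type.split(";")[0]) or ".bin"
        (out / f"{acct['account_number']}{ext}").write_bytes(resp.content)
```

Any conversion or post-processing step would hang off the saved files once the actual requirements (and the previous owner's process) are clarified.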
### Next Steps and Follow-up Actions The meeting concluded with a plan for immediate follow-ups to address the discussed challenges and advance the new initiatives. - **Technical Deep Dives:** Scheduled a dedicated session to prepare for a discussion on the interpreter layer architecture. A separate sync was proposed to involve additional team members in understanding and potentially assisting with the NG script requirements. - **Issue Resolution:** Committed to investigating the Jira efficiency task and the Azure caching phenomenon further to provide the team with clearer paths forward on these blocking issues.
Billing Validation Plan
## Summary ### Power Factor and Load Factor Unit Errors A detailed discussion was held regarding an error for unsupported units of measure, specifically for power factor and load factor. The current plan involves both a short-term manual fix and a long-term automated solution integrated into the platform. - **Automated Fix in Development**: A plan is in place to modify the existing validation logic so that when a bill with an unsupported unit of measure for power factor or load factor is detected, the system will automatically correct the data instead of generating an error. A story for this implementation has already been created. - **Immediate Manual Resolution**: In the interim, a script is being prepared to manually fix the existing bills with this error. The volume is currently very low, with only six bills in production affected, which is why it was not prioritized higher. - **Long-term Strategy**: The ultimate goal is to implement the automated fix within the bill ingestion process to prevent these errors from occurring in the future and eliminate the need for manual script runs. ### Total Charges Logic Validation The team reviewed the process for validating and correcting total charges logic errors in the billing data. This involves running a script to identify and fix discrepancies. - **Script Refinements**: The script used for this validation has been updated to exclude bills from specific clients (Sheetz and Royal Farms) and to also exclude bills that have multiple bill blocks mapped to different locations or bills that are unmapped to locations entirely. - **Execution Plan**: The script was prepared for execution, and the intent is to eventually hand over the daily running of this script to another team member to ensure consistent monitoring and correction. ### Virtual Account Mapping for Error 3018 This issue represents the largest volume of errors within the integrity check and is therefore a top priority. The focus is on developing a robust logic to map virtual accounts correctly. - **Logic Development in Progress**: The core challenge is finalizing the mapping logic. Examples are being analyzed to ensure the logic is solid and will not create larger issues by incorrectly mapping accounts, which would be difficult to revert. - **Phased Implementation Approach**: There is a consensus to implement the logic in phases. Even if the initial logic only resolves a small percentage (e.g., 5%) of the errors with high certainty, it is considered a positive step. The scope and impact of the logic can be expanded incrementally. - **Interim Manual Process**: Until the automated logic is finalized and implemented, a manual process of mapping and fixing these errors using provided files will continue to reduce the backlog. ### Other Billing Error Resolutions Several other specific billing errors were discussed, with updates on their resolution status and ownership. - **Error 2024 (Patio Mount)**: A script for this error is complete and has been used to update several hundred bills. This task is ready to be handed over to another team member for daily execution. - **Overlapping Service Dates**: A new logic was proposed to identify and resolve scenarios where two bills have overlapping service dates. If one bill is a "mock" bill and the total charges match exactly, the system could automatically mark the mock bill as canceled. - **Script vs. 
Systemic Solution**: For this issue, a two-phased approach was suggested: first, creating a script for a short-term manual fix, and second, planning a long-term, systemic solution that automates the resolution during the bill ingestion process, being mindful of potential impacts on system performance. ### Approach to Automation and System Performance A key consideration across multiple issues was how to implement automated fixes without negatively affecting the platform's performance. - **Integration Point Discussion**: For long-term solutions, the team is evaluating whether to integrate data correction logic into the bill validation flow or as a separate step during the bill ingestion process. The latter option is being considered to avoid slowing down the bill edit and parsing functions. - **Balancing Act**: The primary goal is to implement scalable, automated fixes while ensuring that these new processes do not create performance bottlenecks or unintended consequences in the system.
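A minimal sketch of the proposed overlapping-service-dates rule under exactly the stated conditions (one bill is a mock and the total charges match); data shapes are assumed, and a cent-level tolerance absorbs floating-point noise in "match exactly".

```python
from math import isclose

def resolve_overlap(existing: dict, incoming: dict) -> str | None:
    """Proposed short-term rule: if two bills overlap in service dates and
    the existing one is a mock whose total charges match the incoming bill,
    the mock can be auto-canceled. Anything else stays with the operators."""
    overlaps = (existing["service_start"] <= incoming["service_end"]
                and incoming["service_start"] <= existing["service_end"])
    if (overlaps and existing.get("source") == "mock"
            and isclose(existing["total_charges"], incoming["total_charges"],
                        abs_tol=0.005)):
        return "cancel_mock"
    return None  # leave for manual review
```

The same predicate could later run inside the ingestion flow as the systemic solution, subject to the performance considerations raised above.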
UBM Q1 Roadmap
Plan for Arcadia
Report
## Summary The meeting focused on finalizing the Q4 work plan to be presented in an upcoming meeting, with particular emphasis on deliverables for November and December. Key decisions were made regarding the scope of work, the re-prioritization of certain items, and the addition of new initiatives to improve data flexibility and payment processes. ### November Deliverables The primary goal for November is to complete several high-priority items currently in development. - **Invoice Processing Monitoring:** The team is actively working on enhancing invoice processing capabilities. The scope of this work has expanded, but it is still expected to be completed within November. The focus is on tracking and optimization to increase processing capacity. - **AP Files Split:** A significant item involves splitting the AP files into two distinct categories. The development for **non-emergency AP files** is complete and the feature is currently in testing. The work on **emergency AP files** is ready for development but has not yet been picked up by the development team. ### December Deliverables and Scope Adjustment The December plan was reviewed and adjusted to ensure realism, with some items being pushed to the next quarter. - **Payment Data Flexibility:** A new initiative was added to increase the flexibility of data sent to the payment processor. The objective is to add placeholder attributes to the billing account table and the corresponding payment file, creating a structure that can be easily populated with specific data like tax IDs and bill access codes as needs are identified. This provides a buffer to handle ad-hoc field requests without requiring immediate development for each one. - **Credential Management Reporting:** Work continues on establishing a system to track billing account credentials. The core challenge is ensuring a reliable and automated sync of credential data into the central system. A key dependency is defining the exact requirements for the Power BI report that will display credential status and creation metrics. - **Report Development:** Several other reports are slated for December, including adding filters to an existing report and creating a new system status report. - **Deferred Items:** To manage the workload, certain items were deliberately moved out of December. The **integrity check errors (phase three)** and the development of the **interpreter layer** were identified as candidates to be pushed into Q1 of the next year, the latter due to potential complications from an upcoming integration with a new partner. ### Payment Process and Check Management The discussion explored ways to improve the overall payment process and manage outstanding checks more effectively. - **Check Cancellation Policy:** An analysis of check payment data revealed that a very small percentage of checks remain uncashed after 22 days. A proposal was made to establish a formal policy of canceling checks that hit this mark and converting those payments to electronic methods to avoid complications. - **Payment Monitoring Dashboards:** There is a need for new Power BI dashboards to provide visibility into uncashed checks. The desired views would track checks in transit per customer and flag any that exceed the defined threshold, enabling proactive management and communication with the payment processor.
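The proposed check cancellation policy reduces to a simple threshold scan; a sketch assuming per-check issue and cashed dates (field names illustrative):

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=22)  # threshold from the uncashed-check analysis

def stale_checks(checks: list[dict], today: date | None = None) -> list[dict]:
    """Flag checks still uncashed past the threshold; per the proposed
    policy these would be canceled and the payments reissued electronically."""
    today = today or date.today()
    return [c for c in checks
            if c.get("cashed_date") is None
            and today - c["issued_date"] >= STALE_AFTER]
```

The same predicate, grouped by customer, would drive the proposed Power BI views of checks in transit and over-threshold flags.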
Monitoring and Outputs
## Summary ### Initial Casual Conversation The meeting began with a brief, informal discussion about personal health and broader observations on food quality and modern diets. This included reflections on how health appreciation changes with age and a comparative analysis of food standards between the US and Europe, noting the prevalence of high-fructose corn syrup and added sugars in everyday products like yogurt. ### System Monitoring Initiative A primary focus was the development of a new system monitoring solution. The immediate goal is to track the evolution of NQ tokens over a 24-hour period to understand system capacity. - **Initial Implementation Approach:** The plan is to start with a simple, lightweight solution rather than a complex, third-party system. Options being considered include a script running as a cron job or a small Docker container hosting a simple API. - **Hosting and Integration:** The preference is to host this monitoring tool within the Azure ecosystem to maintain consistency with existing services. The tool would query the Cosmos DB and expose data on an endpoint for ingestion by services like Application Insights or Grafana. - **Next Steps and Queries:** The team will proceed by first gathering initial statistics. Based on this data, a decision will be made on whether to contact Microsoft about potentially raising service quotas, though initial reviews of the documentation suggest it may not be necessary. ### Data Synchronization and Output Issues Significant time was dedicated to troubleshooting data synchronization and processing failures between internal systems and the UBM platform. - **Process for Handling Failures:** The current procedure for failures on the internal side involves logging errors (due to unreliable SMTP) for a user to manually review, fix, and then re-run the output process. Failures originating on the UBM side are considered their responsibility to resolve. - **Specific Data Corruption Issues:** Two specific problems were identified where data is being incorrectly manipulated during the output process: - **Unsupported Unit of Measure:** Data captured correctly in the DSS is being altered to an unsupported unit when sent to UBM. - **Incorrect Charge Capture:** There is an issue where the total charges are being captured and sent instead of the current charges, leading to billing inaccuracies. - **Investigation Plan:** The team will gather all relevant information on these specific issues to begin a deep-dive investigation, with the goal of identifying the root cause within the output service. ### Administrative and Access Updates The meeting concluded with a brief update on administrative progress. Access to a Z key account was approved, and the next step is to connect and verify that the correct permissions are in place for monitoring OpenAI services.
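A lightweight sketch of the monitoring idea described above: a small containerized API that counts NQ tokens created in the trailing 24 hours from Cosmos DB and exposes the figure for Application Insights or Grafana to scrape. The database, container, and field names are assumptions.

```python
import os

from azure.cosmos import CosmosClient  # Azure Cosmos DB Python SDK
from flask import Flask, jsonify       # minimal API a small container could host

app = Flask(__name__)
client = CosmosClient(os.environ["COSMOS_URI"], os.environ["COSMOS_KEY"])
container = (client.get_database_client("metering")       # names hypothetical
                   .get_container_client("nq_tokens"))

@app.get("/metrics/nq-tokens")
def nq_token_count():
    # Count tokens created in the last 24 hours; the timestamp field is assumed.
    query = ("SELECT VALUE COUNT(1) FROM c "
             "WHERE c.created_at >= DateTimeAdd('hh', -24, GetCurrentDateTime())")
    count = list(container.query_items(query, enable_cross_partition_query=True))[0]
    return jsonify({"nq_tokens_24h": count})

if __name__ == "__main__":
    app.run(port=8080)
```

Run as a cron job instead, the same query could simply log the count; either variant stays deliberately simple until the initial statistics show whether a quota conversation with Microsoft is even needed.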
UBM Error
## Summary The meeting primarily focused on addressing ongoing data processing and billing errors within the system, with a significant emphasis on resolving issues for specific high-priority clients and streamlining internal workflows for error management. ### Data Validation and Error Management The discussion centered on managing and resolving specific data validation errors that are currently relaxed for most clients but are causing significant issues for two key customers. - **Enabling Stricter Validations for Specific Clients**: A proposal was made to re-enable the "meter consumption more than 75% higher" validation errors (2502 and 2504) exclusively for Royal Farms and Sheets. These clients are experiencing substantial data issues on their bills, leading to extensive retroactive cleanup work. A potential temporary workaround involves modifying the system to resolve these errors for all clients *except* Royal Farms and Sheets, thereby allowing their bills to be flagged for review. - **Prioritization of Error Resolution**: The team currently prioritizes error resolution based on bill due dates across all clients. However, certain clients like Royal Farms and Sheets receive escalated attention due to the volume and impact of their issues. ### Batch Processing and System Failures Significant time was dedicated to understanding and addressing recurring batch processing failures that are currently blocking the workflow. - **Investigation of Batch Failures**: A critical issue was identified where batches containing up to a thousand bills are failing during processing. The primary error encountered is "commodity missing," but this may be a symptom of a larger problem, such as file corruption or incorrect data formatting, including the presence of empty lines in the data files. - **Resolution Process for Failures**: When a batch fails, notifications are sent to an external data group (Whitener Data Group), which is responsible for fixing the issue and resubmitting the batch. The internal team is seeking to better understand and streamline this process to prevent continuous errors rather than applying one-off fixes. ### System Improvements and Future Planning The conversation included plans for system enhancements and better documentation of ongoing efforts to manage errors. - **Transition to JSON Data Format**: A long-term goal is to move away from CSV files to JSON for data transmission between systems, which is anticipated to be a more robust and easier-to-manage format for both sending and consuming data. - **Documenting Error Resolution Workflows**: A new process is being established to document the last execution date of automated scripts that resolve common errors. This will provide a clear trail and help determine which errors remaining in the system require manual review by the operations team, ensuring nothing is overlooked. - **Task Reallocation and Focus**: To optimize team bandwidth, certain straightforward error resolution tasks (e.g., related to past due amounts and power factor calculations) are being transitioned from one developer to another. This will free up the primary developer to focus on more complex issues, specifically the 3018 mapping location error and system reconciliation tasks.
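The temporary workaround described above, resolving errors 2502 and 2504 for every client except the two under scrutiny, boils down to a small predicate. Client names are written here as they appear in the meeting notes.

```python
STRICT_CONSUMPTION_CLIENTS = {"Royal Farms", "Sheets"}  # names as recorded in the notes
RELAXED_ERROR_CODES = {2502, 2504}  # "meter consumption more than 75% higher"

def should_auto_resolve(error_code: int, client_name: str) -> bool:
    """Keep auto-resolving the relaxed validations for everyone *except*
    the clients whose bills need the extra scrutiny; their errors stay
    open so the bills are flagged for review."""
    return (error_code in RELAXED_ERROR_CODES
            and client_name not in STRICT_CONSUMPTION_CLIENTS)
```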
Review Mapping Location Logic
## Summary ### Address Matching Logic for Property Bills The discussion centered on refining the automated logic for matching incoming utility bills to specific property locations, using a bill for "65 Jeffrey Scope" as a primary example. The core of the debate was whether the initial matching step should apply universally or be restricted to common area properties. The current logic involves taking the first number from a bill's service address (e.g., "65") and matching it to the first location with that number. A key concern was that if both a common area and a tenant unit share the same initial number, this could lead to incorrect mappings. It was determined that the logic should proceed as designed; if a simple match fails, the system will progressively add more address components (like the street name and unit number) until a unique match is found or the bill is flagged for manual review. The conclusion was that the existing step-by-step matching process is sufficient for handling both unit and common area bills. ### Commodity Mismatch and Data Integrity A significant data integrity issue was identified concerning a bill from Ottawa Waterworks. The bill detail was incorrectly flagged as "natural gas" due to the presence of a "therm factor" and "meter multiplier" in the data, even though the vendor name strongly suggests it is a water bill. This misclassification poses a serious risk to data analytics, as consumption and cost data would be attributed to the wrong utility commodity. The conversation highlighted that automatically mapping such a bill would perpetuate this error silently. It was acknowledged that the system needs a mechanism to flag or stop the processing of bills where a new commodity appears for an existing billing account, prompting a manual confirmation of the commodity type to prevent corrupting historical data. ### Root Cause Analysis of New Virtual Accounts A major concern driving the analysis is the unexpectedly high volume of bills requiring manual mapping, which indicates that new virtual accounts are being created instead of bills flowing through to existing ones. This suggests underlying data inconsistencies that are not being resolved. For customers like Caliber, where account numbers should be stable, this is particularly problematic. A specific root cause was identified: extraneous spaces in the "service account ID" field, introduced either by an AI or OCR process during bill digitization. These spaces cause the system to treat "2118 26" and "211826" as two different accounts, leading to duplicate virtual account creation. The immediate action plan is to implement data cleansing to remove these spaces and then analyze the impact on the mapping volume for specific customers. ### Investigation Plan for Mapping Issues The team plans a targeted investigation to understand why new virtual accounts are being created so frequently. The focus will be on two customer segments: - **Stable Account Customers (e.g., Caliber):** For these customers, account numbers should not change frequently. The investigation will drill down to find patterns, such as the service ID spacing issue, that explain the recurring mapping requirements. - **Variable Account Customers (e.g., Victra):** These customers genuinely have more fluid account numbers due to factors like vacant units or e-commerce setups. The analysis here will distinguish between legitimate new accounts and those resulting from data inconsistencies. 
The goal is to avoid applying automated fixes that might mask genuine data problems, which could become much harder to correct later in the data pipeline. ### Overlap Error 3012 Handling A separate but related topic involved the procedure for handling "Overlap Error 3012," which often occurs with mock bills. The standard operating procedure is for operators to identify when the earlier ("left") bill in the overlap is a mock bill whose total amount matches the current bill. In such cases, they add a bill block to the mock bill's ID to cancel out its charges, thereby resolving the overlap conflict. This process was confirmed for documentation and will be incorporated into the broader system scope after the location mapping issues are resolved.
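A minimal sketch of the two remediations discussed above, assuming hypothetical field names: normalizing whitespace in the service account ID so OCR-introduced spaces stop spawning duplicate virtual accounts, and progressively adding address components until a unique location match is found.

```python
import re

def normalize_service_account_id(raw: str) -> str:
    """Strip whitespace introduced during OCR/AI digitization so that
    '2118 26' and '211826' resolve to the same account."""
    return re.sub(r"\s+", "", raw)

def match_location(service_address: str, locations: list[dict]) -> dict | None:
    """Progressively add address components (number, street, unit) until
    exactly one location matches; return None to flag for manual review.
    The 'address' field name is hypothetical."""
    tokens = service_address.split()
    candidates = locations
    for i in range(1, len(tokens) + 1):
        prefix = " ".join(tokens[:i]).lower()
        narrowed = [loc for loc in candidates if loc["address"].lower().startswith(prefix)]
        if len(narrowed) == 1:
            return narrowed[0]          # unique match found
        if not narrowed:
            return None                 # no match at this level: manual review
        candidates = narrowed           # still ambiguous: add another component
    return None                         # exhausted the address: manual review
```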
Access and Troubleshooting
## Summary ### Access and Permissions Issues A significant portion of the discussion centered on resolving access and permissions problems within the Azure environment, specifically concerning the AI Foundry and related resource groups. The core issue identified was a lack of necessary permissions to view and interact with critical resources, which is preventing effective troubleshooting and oversight. - **Replicating full access permissions:** The proposed solution is to replicate a specific set of access permissions to ensure all required resource groups, including the one housing the models, are visible and manageable. - **Completing prerequisite training:** It was noted that obtaining elevated access might be contingent on completing certain, unspecified training modules, which is a necessary step for full account provisioning. ### Troubleshooting and Support for FDG Connect The instability of the FDG Connect service and the current process for addressing its issues were major topics. The existing method of simply restarting the service was deemed insufficient for resolving underlying problems. - **Inadequate "restart" requests:** The current practice of operators requesting service restarts was criticized for being a superficial fix that often misses the root cause of failures, which could be persistent bugs. - **Empowering on-site technical staff:** A plan was formulated to leverage on-site developers who possess deeper knowledge of the service. These individuals would be empowered to perform initial troubleshooting, gather detailed error information, and potentially implement quick fixes, thereby creating a more robust first line of defense. - **Developing a structured troubleshooting process:** There is a clear need to move away from ad-hoc solutions and establish a formal process where operators provide comprehensive error details, such as specific error codes and descriptions of the failure context, to enable effective diagnosis. ### Process Refinement and Communication The conversation highlighted a need to refine internal and external communication processes to better handle technical issues. The goal is to create clarity and efficiency in how problems are reported and resolved. - **Internal alignment on new procedures:** A meeting is planned to socialize the new troubleshooting and access protocols with the entire engineering team, ensuring everyone understands their role and the updated workflow for handling support tickets. - **Setting expectations with operators:** A parallel effort will involve communicating with operators to educate them on the information required for effective support, moving beyond simple restart requests to providing detailed incident reports. ### Infrastructure and Tooling Improvements To support the new troubleshooting paradigm, updates to the infrastructure and tooling are being implemented. These changes are aimed at providing the team with better control and visibility into the systems. - **Deployment of a restart pipeline:** A new automated pipeline is currently in a pull request stage; once deployed, it will allow authorized personnel to restart the FDG Connect service across different environments (Dev, Preview, Prod) directly, serving as a basic safeguard mechanism. - **Enhancing monitoring capabilities:** Efforts are underway to gain access to production monitoring data, specifically querying Cosmos DB, which will provide crucial insights into system performance and error tracking. 
### Next Steps and Future Planning The discussion concluded with a forward-looking plan to solidify the discussed changes and ensure continuous improvement in system reliability and support efficiency. - **Immediate access provisioning:** The highest priority is to secure the necessary Azure permissions for key personnel to unblock current operational limitations. - **Synchronization on project status:** An upcoming meeting will provide a status update on the deployment of the new pipeline and the progress of enhancing monitoring capabilities. - **Defining a clear escalation path:** The refined process will formally establish a tiered support structure: basic issues can be handled by on-site developers using the new tools, while more complex problems will be escalated to specialized developers or infrastructure experts for a thorough investigation.
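As a starting point for the monitoring work, a hedged sketch of a Cosmos DB error query using the `azure-cosmos` SDK is shown below; the account URL, database, container, and field names are assumptions, since the real logging schema was not specified.

```python
from azure.cosmos import CosmosClient

# Account URL, key, and the database/container/field names are assumptions;
# the real schema would come from the FDG Connect logging setup.
client = CosmosClient(url="https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("fdg-connect").get_container_client("service-logs")

# Pull recent failures with their error codes so tickets can carry real
# diagnostics instead of a bare "please restart" request.
query = (
    "SELECT c.timestamp, c.errorCode, c.errorMessage FROM c "
    "WHERE c.level = 'Error' AND c.timestamp >= @since "
    "ORDER BY c.timestamp DESC"
)
for item in container.query_items(
    query=query,
    parameters=[{"name": "@since", "value": "2025-11-01T00:00:00Z"}],
    enable_cross_partition_query=True,
):
    print(item["timestamp"], item["errorCode"], item["errorMessage"])
```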
Arcadia Onboarding Plan
## Summary ### Initial Customer Prioritization for Vendor Transition The meeting focused on developing a strategy for transitioning customer credential management and bill download services to an external vendor, Arcadia. The primary objective is to prioritize which customers to migrate first, with a strong emphasis on mitigating risk and addressing immediate operational liabilities. - **Victra was identified as the top-priority customer** for the initial transition phase due to being a significant source of operational "noise" and having the highest potential for financial liability, including the risk of late fees and lost revenue. - **A phased rollout approach was proposed**, starting with a single customer like Victra to iron out the process before scaling to the remaining 14 high-priority "built-by" customers. This method aims to ensure a smoother rollout by resolving process issues on a smaller scale first. - **The long-term goal is to transition all ~200 customers**, but a detailed 3-month onboarding plan needs to be developed. This plan will outline the specific sequence and timeline for migrating the initial 15 customers and then the broader portfolio, providing clear expectations for the vendor. ### Credential Management and Vendor Processes A significant portion of the discussion centered on the mechanics of credential management and the specific processes the vendor, Arcadia, will follow. The goal is to offload the tasks of creating new credentials and managing existing ones. - **The vendor requires a specific data file template** containing essential customer information (account list, service addresses) to begin work. This file will distinguish between accounts needing new credentials and those with existing, working credentials that can be accessed via an API. - **For credential issues, the vendor will flag failures** and provide reasons (e.g., incorrect password). They classify failures into specific buckets, which will aid in troubleshooting and process improvement. - **A key question was raised about standardizing credential formats**. While the vendor will use the credentials provided, there was a discussion on whether they could update existing credentials to a more standardized, secure format, though this was not a confirmed part of the initial scope. ### Data Flow, System Integration, and Quality Control The conversation delved into the technical architecture for integrating Arcadia's services, focusing on where data will live and how quality will be assured. This is a critical component for ensuring the long-term success and reliability of the service. - **A major consideration is the final storage location for credentials and bill data**. The consensus is that the ultimate "source of truth" must be the internal UBM platform, requiring a sync mechanism from Arcadia, rather than giving customers direct access to the vendor's system. - **An "interpreter layer" is being considered** to act as a middleware between legacy systems (DSS/DDIS) and UBM. This layer would handle data from Arcadia and provide a point for remediation if data fails to process correctly. - **Robust quality control and reporting are deemed essential**. This includes managing an "Arcadia queue" to track missing records, field-level data discrepancies, and ensuring SLAs are met. The team acknowledged that dedicated reporting and close management, especially in the initial months, will be non-negotiable. 
### Transition Strategy and Contingency Planning The team discussed a pragmatic approach to the transition, acknowledging that internal processes and system integrations will take time to build and stabilize. A backup plan is necessary to maintain operations during this period. - **A parallel, temporary process is proposed** where the internal team continues to use existing methods (KDFO credentials) to download invoices. This ensures business continuity while the new Arcadia integration is being developed and tested, despite the temporary double spend. - **The transition presents an opportunity to re-evaluate and streamline internal systems**, specifically the structure of the UBM platform. Questions were raised about the necessity of certain components, like the "build block," and the potential to reconfigure error codes and user interfaces based on customer-specific needs. ### Next Steps and Vendor Collaboration The meeting concluded with a focus on the immediate actions required to move the initiative forward, emphasizing the need for clear communication and defined processes with the vendor. - **The immediate next step is to push for contract finalization** from a legal perspective and to begin the internal IT process for setting up the vendor. - **Requesting a detailed process flow from the vendor is critical**. This map would clearly illustrate how a credential flows through their system and how success or failure is communicated back, ensuring both parties have aligned expectations. - **The vendor has offered training and materials**, which the team plans to take advantage of to better understand the standard workflow and any potential customizations needed.
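A sketch of what the vendor hand-off file might look like, with hypothetical column names (the actual template will come from Arcadia); the key point is the explicit distinction between accounts needing new credentials and accounts whose working credentials are retrievable via the API.

```python
import csv

# Hypothetical hand-off template; the real column set comes from Arcadia.
FIELDS = ["customer", "account_number", "service_address", "vendor",
          "credential_status"]  # "needs_new" vs. "existing_via_api"

def write_arcadia_file(path: str, accounts: list[dict]) -> None:
    """Emit one row per account, marking whether Arcadia must create a new
    credential or can pull an existing, working one through the API."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for acct in accounts:
            writer.writerow({
                "customer": acct["customer"],
                "account_number": acct["account_number"],
                "service_address": acct["service_address"],
                "vendor": acct["vendor"],
                "credential_status": ("existing_via_api"
                                      if acct.get("has_working_credential")
                                      else "needs_new"),
            })
```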
DSS Planning
## Summary The meeting primarily focused on technical discussions surrounding the migration of the output service to a new system (DSS) and an analysis of the current OCR model implementation for invoice processing. Key decisions were made regarding the project's immediate direction and technical approach. ### Administrative Updates and Access Requests A brief administrative check-in occurred to address pending items and access permissions. There was a need to follow up with an absent colleague regarding feedback on invoice versus generic PDF processing. Additionally, progress was confirmed on obtaining access to OpenAI monitoring via Azure AI Foundry, though one team member was still awaiting final access, which would be facilitated if the initial request proved successful. ### Output Service Migration Strategy The core of the meeting was dedicated to the ongoing effort to move the output service. The current approach involves a deep analysis of the existing codebase to determine which components can be ported directly and which require architectural changes. - **Initial Migration Plan:** The immediate strategy is to copy the output service to the new DSS system while minimizing changes to the existing database interactions. This includes retaining build and batch tracking for the time being to avoid processing duplicates. - **Future Enhancements:** It was acknowledged that a subsequent phase will be necessary to rebuild certain components, specifically to create a service that coordinates between the new and old systems to prevent duplicate downloads and to address user activity tracking and reporting features. - **Focus on Core Functionality:** A key directive was to prioritize a straightforward migration of the core service for now, treating discovered requirements as future enhancements rather than immediate blockers to prevent scope creep. ### Proposed Architecture for the New Output Service A high-level technical architecture was proposed for the new output service within DSS to handle long-running processes without causing timeouts. The plan involves breaking down the workflow into three distinct, sequential queues. - **Data Compilation Queue:** The first queue would be responsible for gathering and compiling the raw data into a specific, required format. - **Data Validation Queue:** The second queue would perform validation checks on the compiled data to ensure its integrity and correctness before further processing. - **Data Transformation and Dispatch Queue:** The final queue would transform the validated data into its expected output format and subsequently dispatch it to its final destination. ### Analysis of the OCR Model for Invoice Processing A significant portion of the discussion centered on the OCR model used for extracting data from invoices, comparing the use of a general model versus an invoice-specific one. - **Current Process Justification:** The team confirmed that the current use of a **general OCR model** is intentional. The invoice-specific model was found to be inadequate because it has a fixed set of fields and cannot capture dynamic, bill-specific data like multiple meters or unique rate codes, which are crucial for this project. - **Proposed Experimentation:** A suggestion was made to experiment with using a more powerful model (O3) for the initial data extraction to potentially reduce noise and improve context for the subsequent data parsing step. 
- **Technical Discovery:** During the discussion, a crucial technical detail was uncovered: the current system only passes the raw text content from the OCR result to the LLM for parsing, not the full JSON output which includes valuable structural information like bounding boxes for text blocks. ### Technical Discoveries and Next Steps The meeting concluded with a plan to formalize the work and investigate the technical findings. - **Task Formalization:** It was agreed that the discussed work on the output service queues would be broken down into separate, detailed tickets in the project management system (JIRA) to facilitate tracking and discussion. - **OCR Code Investigation:** The discovery that only raw text is being passed to the LLM prompted a need for further code investigation to understand if leveraging the full OCR output (including structural data) could improve extraction accuracy. However, it was decided to defer deep work on the OCR model until more pressing issues are resolved.
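A minimal sketch of the proposed three-queue layout, assuming Azure Storage Queues and hypothetical queue names; each stage performs one bounded unit of work and enqueues the next, so no single step runs long enough to hit a timeout.

```python
from azure.storage.queue import QueueClient

CONN = "<storage-connection-string>"  # placeholder

def enqueue(queue_name: str, bill_id: str) -> None:
    QueueClient.from_connection_string(CONN, queue_name).send_message(bill_id)

def compile_stage(bill_id: str) -> None:
    # 1) gather the raw data and compile it into the required format,
    #    then hand off to validation
    enqueue("output-validation", bill_id)

def validation_stage(bill_id: str) -> None:
    # 2) run integrity and correctness checks on the compiled data,
    #    then hand off to transformation/dispatch
    enqueue("output-dispatch", bill_id)

def dispatch_stage(bill_id: str) -> None:
    # 3) transform the validated data into the expected output format
    #    and push it to the final destination (UBM)
    pass
```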
DS Ai Bill Reader and Arcadia
## **AI Bill Reader Integration and Data Architecture** The meeting focused on integrating the AI Bill Reader service across multiple applications and establishing a centralized data architecture for the Constellation Navigator platform. The discussion centered on the current capabilities of the Data Services (DSS) invoice processing system, its limitations, and the strategic path forward for shared data ingestion. ### **Current AI Bill Reader Capabilities and Limitations** The DSS AI Bill Reader is operational but faces significant capacity constraints and architectural dependencies. The system currently processes utility bills through a multi-step pipeline: it first uses Microsoft's Azure AI Document Intelligence for OCR, then sends the extracted data to an OpenAI model for structured parsing, and finally applies UBM-specific enrichment logic. However, the service has a maximum daily processing capacity of approximately 3,500 bills, which is already under strain from existing UBM workloads. The OpenAI model experiences periodic reliability issues, with occasional 3-day response delays that create processing backlogs. The system currently operates with manual queue management, allocating 15 invoices every 5 minutes to the highest priority "prepaid live customers" queue, while other workflows like enrollment and historical processing receive lower priority. ### **Integration Requirements for Carbon Accounting and Glimpse** Multiple application teams require access to the bill processing capabilities, each with distinct use cases and data requirements. The Carbon Accounting team has already developed their data ingestion architecture and can consume the JSON output from the AI Bill Reader, but needs API access to send PDFs and receive structured responses. Their use cases include both historical bill processing and live monthly bill ingestion, though they have more flexibility on processing timelines compared to UBM's bill pay requirements. The Glimpse application primarily needs historical bill data rather than live processing. Both teams emphasized that they can work with the existing UBM-enriched data schema, as they only require subsets of the extracted fields rather than the complete bill pay information. ### **Centralized Data Layer Strategy** A critical consensus emerged around establishing a Navigator-wide data layer to prevent duplicate processing and enable cross-application data sharing. The proposed architecture would create a central repository where all processed bills are stored once, regardless of which application initiated the ingestion. This would allow applications to check for existing data before triggering new processing, particularly important given the AI capacity constraints. The system would need to handle multiple data sources including PDF uploads, Arcadia integrations, Utility API connections, and potentially direct flat file imports from utilities like Constellation. A shared customer identification system across applications is essential for this architecture to function effectively. ### **API Development and Access Priorities** Immediate technical work is needed to expose the DSS IPS service through documented APIs for other application teams. The API development must separate UBM-specific workflows from core bill processing functionality, removing dependencies on UBM client and vendor dropdowns that currently gate access. Authentication, rate limiting, and priority queuing mechanisms need to be established to manage access across teams.
The initial focus will be on providing test API access while capacity scaling and reliability improvements continue. Documentation through Postman collections and ReadMe access will enable development teams to begin integration work despite the ongoing capacity constraints. ### **Data Taxonomy and Customer Identification Challenges** Significant challenges exist in establishing common data definitions and customer identification across applications. Each team currently has different definitions for fundamental concepts like customers, locations, and facilities. A Navigator-wide data dictionary is needed to map these concepts and enable cross-application data sharing. The customer identification problem is particularly complex: applications need to recognize when they're processing bills for the same customer across different systems. This requires either a centralized CRM-like system or robust cross-referencing capabilities between application-specific customer identifiers. ### **Alternative Data Source Integration** The discussion highlighted opportunities to reduce AI processing load by leveraging structured data sources when available. Arcadia provides both PDF bills and pre-structured JSON data, suggesting that for Arcadia-sourced bills, the system could bypass AI processing entirely. Similarly, Constellation utility provides flat files that could be processed directly without OCR. This approach would reserve limited AI capacity for bills that truly require OCR and intelligent extraction. The architecture needs to accommodate these multiple ingestion paths while maintaining data consistency and avoiding duplication. ### **Demonstration of Current DSS Capabilities** The live demonstration showed the current DSS interface and processing workflow, highlighting both capabilities and UBM-specific dependencies. The system extracts extensive data from utility bills including client account information, vendor details, line items, charges, and consumption data. However, the current implementation requires UBM-specific context like client and vendor identification before processing, which wouldn't work for other applications. The demonstration also revealed the complex workflow states and queue management system that would need to be abstracted for broader consumption. ### **Next Steps and Timeline Planning** The team established concrete next steps including scheduling a December meeting to advance the joint database design and customer identification strategy. Immediate actions include providing API documentation to development teams, establishing test access to the DSS system, and beginning the process of defining common data taxonomies. The longer-term roadmap involves scaling AI processing capacity, developing intelligent queue management, and creating the centralized data layer infrastructure. The approach emphasizes parallel progress: teams can begin integration work with current APIs while the broader architectural questions are resolved.
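The queue allocation rule described above can be illustrated with a short sketch; the queue names follow the discussion, while `dispatch()` and the data structures are hypothetical stand-ins.

```python
import time
from collections import deque

# Queue names follow the discussion; dispatch() is a hypothetical stand-in
# for handing an invoice to the AI Bill Reader pipeline.
queues = {
    "prepaid_live": deque(),  # highest priority
    "enrollment": deque(),
    "historical": deque(),    # lowest priority
}
PRIORITY = ["prepaid_live", "enrollment", "historical"]
BATCH_SIZE, INTERVAL_SECONDS = 15, 300  # 15 invoices every 5 minutes

def dispatch(invoice) -> None:
    print(f"sent {invoice} to the bill reader")

def run_dispatcher() -> None:
    while True:
        for name in PRIORITY:
            if queues[name]:
                for _ in range(min(BATCH_SIZE, len(queues[name]))):
                    dispatch(queues[name].popleft())
                break  # only the highest-priority non-empty queue gets this slot
        time.sleep(INTERVAL_SECONDS)
```

Note that 15 invoices every 5 minutes works out to roughly 4,300 invoices per day at the theoretical maximum, which is broadly consistent with the ~3,500-bill daily ceiling quoted once real processing time and retries are accounted for.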
Azure MVP criteria and timeline
## Project Overview and Current Status The meeting centered on significant confusion and concerns regarding the migration of the UBM application to Azure, specifically focusing on the definition of a Minimum Viable Product (MVP) and an unrealistic end-of-2025 timeline. The core issue is a fundamental misalignment between the project's leadership and the technical team on what constitutes a deliverable product and the immense amount of work remaining. The team is currently unable to commit to any timeline due to unresolved dependencies and a lack of a clear, agreed-upon project plan. ## Confusion Surrounding Timelines and Commitments A primary point of discussion was the origin and validity of the stated project timelines, which appear to be based on outdated or misunderstood information. The team clarified that no formal commitments have been made for an MVP or a full migration by the end of 2025. - **Source of Timeline Confusion:** The discussed timeline appears to stem from an old Microsoft Project plan created in 2022-2023, which is no longer relevant to the current project's scope and challenges. - **Lack of Formal Commitment:** It was explicitly stated that the technical team has not committed to any timeline for the MVP, with the source of management's expectations remaining unclear. - **Shifting Goalposts:** The initial understanding was a Proof of Concept (POC) for August, which has now seemingly evolved into a more substantial MVP expectation by the end of the year, a target that is already unattainable in November. ## Defining the Minimum Viable Product (MVP) A major obstacle is the ambiguous and shifting definition of what the MVP entails. The team expressed that the current interpretation is insufficient for a customer-facing application and needs clarification. - **Internal vs. Customer-Facing:** There is a strong concern that the current MVP definition is akin to an internal test where "people can see data on a website," which is **not helpful for customers** and won't meet their needs. - **POC vs. MVP:** The project's goals have blurred between a simple Proof of Concept (focused on deploying the application in AKS and making it accessible) and a more robust MVP where almost all features are tested and functional. - **Current State:** The application currently has only very small parts running in the target Azure environment, with numerous issues and a lack of security approvals, indicating it is far from even a basic MVP. ## Critical Blockers and Dependencies The migration is severely hampered by external dependencies and technical roadblocks, primarily related to working within the Constellation tenant. These issues are causing significant delays and making accurate timeline prediction impossible. - **Access and Security Hurdles:** The team has faced prolonged issues with access permissions and limitations within the Constellation environment, which have stalled progress. - **Architectural Approval Delays:** A technically sound architecture has been proven in a separate, fully accessible Azure tenant, but it is still awaiting acceptance from the Constellation security team. - **Unpredictable Effort Multipliers:** Tasks that take a few hours in a controlled environment can balloon into multi-day endeavors in the Constellation tenant due to these unforeseen blockers, making traditional estimation unreliable.
## Proposed Path Forward and Key Messages To address the confusion and reset expectations, the team agreed on an immediate action plan to create clarity and formally communicate the project's challenges to leadership. - **Creation of a High-Level Project Plan:** The immediate next step is for the technical leads to develop a high-level plan outlining the general phases and remaining work, to be completed by the following Monday. This plan will serve to visually demonstrate the significant work remaining. - **Communicating Reality to Leadership:** The goal of the upcoming meeting with stakeholders is to ensure everyone understands that the end-of-2025 timeline is not feasible. The message will highlight the extensive work required and the team's dependency on other groups to resolve ongoing blockers. - **Establishing a Buffer for Consultation:** Given the historical struggles with simple tasks, the team plans to advocate for adding a significant buffer (suggested as four months) to any future timelines to account for the slow consultation and resolution process with other teams.
Roadmap Alignment
## Summary ### Operational Challenges and Team Efficiency The meeting opened with a discussion about the negative impact of back-to-back meetings on productivity, noting that constant context switching often prevents meaningful work from being completed and leads to suggested actions not being taken. A strategy was proposed to mitigate this by blocking out large, uninterrupted time slots (specifically, three-hour blocks) for focused work, with a preference for scheduling meetings in the morning to free up afternoons for deep work. ### Technical Tasks and Process Documentation A key action item involves the **NG downloader and image-to-PDF conversion**, which is needed for onboarding a client's data. While the task is urgent, there is a significant knowledge gap as no one on the team is currently familiar with the process. The decision was made to proceed with the work but to ensure it is properly documented as a standard onboarding exercise. This documentation will cover the entire workflow, including the location of scripts, the file formats they handle, and the upstream and downstream processes, to prevent similar knowledge silos in the future. Concurrently, there was an unresolved issue regarding **access to an Azure OpenAI instance**. Access requests had been submitted but were seemingly stalled in the system. A workaround was successfully executed to grant access, but the underlying problem with the request system itself remains and requires further investigation with the service desk. ### Roadmap and Priority Planning for Q4 The team reviewed and adjusted the roadmap for November and December, introducing new priorities that have emerged. **November Priorities:** - **Invoice Processing Scaling:** The system is now processing close to 3,500 invoices per day, exceeding initial goals. The immediate focus is shifting from pure scaling to **comprehensive tracking and dashboard creation**. This is crucial for monitoring API usage (like OpenAI token consumption) and providing data-driven evidence for any future capacity discussions with vendors. - **DSS Logic Fixes:** A dedicated effort is required to address underlying logic issues within the Data Services (DSS) and UBM systems. This has been deprioritized previously but is now being elevated to ensure system stability. - **"Unified Report" Development:** This initiative aims to create a comprehensive dashboard that provides a single view of a customer's status, from onboarding through the entire builder stack. The approach will be to have stakeholders provide specific Excel examples of their desired reports to streamline requirements gathering and set realistic expectations. **December and Beyond:** The team expressed concern over an overly ambitious December schedule. The strategy is to push for clearly defined requirements from stakeholders upfront and to resist becoming a de facto research and reporting team for other departments. The goal is to transition from reactive firefighting to a more stable operational state, which will enable work on strategic projects in the new year. One such project for the following year is a **reporting API/database**, identified as a critical feature for potential customers. ### Team Structure and Future Initiatives A significant topic was the future structure of the team, particularly a planned transition where a key member would move into a Bill Pay-focused role.
This transition is contingent upon achieving stability in the Data Services domain and is not considered feasible until the current operational challenges are resolved. The long-term vision is for the UBM team to evolve into a proper platform team, freed from daily QA and data validation issues, so it can focus on building new features like the reporting API. A plan was set to begin detailed planning for 2026 initiatives in a shared document to further refine timelines and responsibilities. ### Arcadia Client Migration Finally, the upcoming migration for the Arcadia client was discussed. Two technical approaches for data ingestion are being considered: treating Arcadia similarly to an existing client (Biowatcher) or developing a new pipeline based on their API structure. A proof-of-concept will be initiated to determine the best path forward. It was strongly emphasized that the team's responsibility is limited to building the pipeline, with ongoing maintenance and error resolution remaining the responsibility of the Arcadia Customer Success Managers.
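For the token-consumption tracking mentioned under Invoice Processing Scaling, a hedged sketch is shown below using the OpenAI Python SDK, whose responses carry per-request token counts; the model name and the `log_usage()` destination are placeholders, not the team's actual deployment.

```python
from openai import OpenAI

client = OpenAI()

def log_usage(prompt_tokens: int, completion_tokens: int, total_tokens: int) -> None:
    # placeholder sink: swap in Application Insights, a dashboard table, etc.
    print(f"prompt={prompt_tokens} completion={completion_tokens} total={total_tokens}")

def parse_invoice(ocr_text: str, model: str = "gpt-4o") -> str:
    """Run one extraction call and record its token footprint; the model
    name here is an assumption, not the team's actual deployment."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Extract the bill fields:\n{ocr_text}"}],
    )
    usage = response.usage  # token counts are returned with every response
    log_usage(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
    return response.choices[0].message.content
```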
Azure MVP Alignment
## Summary The meeting primarily focused on reviewing a password policy document and discussing the significant challenges and risks associated with the ongoing migration from GCP to Azure. A major point of concern was the definition and feasibility of an MVP (Minimum Viable Product) for the Azure environment, given the compressed timeline and numerous unresolved technical and procedural hurdles. ### Password Policy Review A previously shared password policy document was reviewed, confirming no recent updates to its specifications. The policy mandates a minimum password length of 12 characters and a maximum of 64 characters, enforces multi-character requirements, sets a lockout after five consecutive failed attempts, and maintains a password history of 36 months. A separate, unchanged policy exists for service accounts. ### Azure Migration Status and MVP Definition The corporate IT team has requested a prioritized timeline for an Azure MVP, but the project is mired in uncertainty and faces extreme risks. The migration has been discussed for nearly two years with minimal progress, and the current expectation to deliver an MVP by December is considered nearly impossible due to the upcoming holiday season and a lack of foundational work. The team expressed that they cannot commit to any timeline until critical issues are resolved. - **Lack of Progress and Clarity:** The project is still in early architectural discussions, with basic questions about the application's functionality still being asked by the corporate team, indicating a fundamental lack of understanding. - **High-Risk Timeline:** Attempting a migration in the remaining weeks of the year is deemed unfeasible, and there is a strong reluctance to define a limited MVP that could be misinterpreted as project completion, leaving critical functions untested. - **Security and Access Roadblocks:** A primary roadblock is obtaining security approval to expose the migrated database to the public, which is essential for any end-to-end functionality. Furthermore, development and testing have so far occurred on a test subscription, not the actual Constellation environment, introducing significant unknown risks. ### Technical and Procedural Challenges Several specific technical and operational challenges were identified that complicate the migration and post-migration support. These issues highlight a disconnect between the corporate IT team's objectives and the practical realities of the application's needs. - **Complex Integrations:** Key functionalities like Single Sign-On (SSO), Power BI integration and embedding, database exposure for other tools, and data integration with Karbon accounting remain as major, untested challenges. - **Loss of Control and SLAs:** In the new Azure/Constellation environment, the team anticipates a loss of administrative control, preventing them from performing same-day fixes or adhering to current Service Level Agreements (SLAs). This necessitates a redefinition of SLAs and potentially a dedicated internal IT liaison. - **Disaster Recovery Concerns:** The current disaster recovery capability in GCP, which allows for a full rebuild in under 24 hours, is at severe risk. The approval processes and lack of control in Azure are expected to drastically increase recovery times, with no commitment from Microsoft or corporate IT on who will own and meet these recovery objectives. 
### Integrity Check Errors and Virtual Account Mapping A significant operational issue involves a high volume of "3018" integrity check errors related to virtual account mapping. While a bulk upload feature is in development to ease the manual workload, the root cause of the errors is more complex than initially thought. - **Upcoming Bulk Upload Solution:** A new feature is being developed to allow bulk downloading of unmapped virtual accounts and bulk uploading of corrected mappings across customers, aiming for completion by the second week of December. This will automate the current manual process handled by a specific individual. - **Ongoing Root Cause:** The errors are not solely due to location address discrepancies. The core issue is the continual creation of *new* virtual accounts, which occurs when there are variations in any of the six key attributes that define a virtual account (e.g., billing ID, service account ID, meter ID). This suggests underlying problems with data consistency from the source system that need to be addressed to prevent the errors from recurring. ### Data Consistency and Fuzzy Logic The discussion explored interim and long-term solutions for handling the virtual account mapping errors. The immediate goal is to automate the existing manual mapping process to clear the backlog, while a future-state solution would involve implementing fuzzy matching logic to automatically map similar strings (e.g., "Avenue" and "Av") and prevent the errors from appearing in the first place. The team plans to test a proposed fuzzy logic algorithm against recent manual mappings to validate its effectiveness before implementation.
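The planned validation of the fuzzy logic algorithm could be run as a simple backtest against recent manual mappings, as sketched below; `difflib` stands in for whatever similarity measure is ultimately chosen, and the record shape is hypothetical.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def backtest(manual_mappings: list[tuple[str, str, list[str]]],
             threshold: float = 0.9) -> float:
    """Each record is (unmapped_string, human_choice, candidate_targets).
    Returns the fraction of cases where the algorithm would have agreed
    with the operator's manual mapping."""
    hits = 0
    for source, chosen, candidates in manual_mappings:
        score, best = max((similarity(source, c), c) for c in candidates)
        if score >= threshold and best == chosen:
            hits += 1
    return hits / len(manual_mappings) if manual_mappings else 0.0
```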
DSS/UBM Errors
## Summary ### The Core Problem: 3018 Backlog and Unmapped Virtual Accounts A significant operational challenge is a backlog of approximately 2,300 items, identified by error code 3018. This error signifies that incoming bills cannot be automatically matched to an existing "virtual account" within the system, preventing proper processing and billing. The root of the problem lies in the system's strict requirement for an exact match across six specific data points to map a bill to a virtual account. When even a minor discrepancy occurs, such as a variation in address format (e.g., "AVE" vs. "Avenue"), the system fails to recognize it as a match and creates a new, unmapped virtual account instead. ### Root Cause Analysis: The Six-Factor Matching Logic The system's virtual account creation and matching logic is based on a stringent comparison of six specific attributes from the bill data against the data provided during the customer onboarding process. These six factors are: - Account Code - Bill Type - Commodity - Vendor Code - Meter Serial Number - Client Account The matching process is purely string-based and exact. **Location data is not one of the six matching factors**; it is a separate attribute that is mapped *after* a virtual account is successfully identified. The core issue is that minor inconsistencies in any of the six primary fields cause the entire match to fail. ### Current Manual Process and Its Limitations The existing workflow to resolve these unmapped virtual accounts is manual, time-consuming, and involves multiple people, creating a bottleneck. The process involves exporting a list of unmatched items, manually comparing them against a master list of onboarded locations (often using a spreadsheet), and then using an admin tool to bulk-attach the correct virtual account. This manual intervention is necessary because the system's fuzzy logic for addresses is not being applied at the virtual account matching stage, only later for location mapping. ### Proposed Automation and Interim Solutions To address the growing backlog and inefficient process, a two-pronged approach was discussed. The immediate goal is to automate the manual matching step currently performed by the operations team. This would involve a script that programmatically compares the unmatched virtual accounts with the existing location data, using matching thresholds (e.g., a 90% similarity score on addresses) to suggest or automatically perform the correct mapping. For the short term, establishing a strict weekly cadence for this manual process was emphasized to prevent the backlog from ballooning again, even though it remains a temporary, resource-intensive fix. ### Related Technical Discussions The conversation also touched upon other ongoing projects and issues. There is significant concern and confusion surrounding the timeline and scope of a planned Azure migration, with a need to align the entire team on priorities and realistic expectations. Additionally, a separate data discrepancy was mentioned, where a user appears to be filtering on an incorrect data field, leading to confusion in reports. A brief mention was also made of procuring Power BI licenses, which is currently blocked on access to the corporate financial system.
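A minimal sketch of the six-factor matching logic as described, with hypothetical field names; because the key is built from raw strings, any one-character difference inside a field yields a different key and hence a new, unmapped virtual account.

```python
# Hypothetical field names for the six factors described above.
MATCH_FIELDS = ("account_code", "bill_type", "commodity",
                "vendor_code", "meter_serial_number", "client_account")

def virtual_account_key(bill: dict) -> tuple:
    # purely string-based and exact: no trimming, casing, or fuzzy logic,
    # which is why a minor variation inside any field breaks the match
    return tuple(str(bill.get(field, "")) for field in MATCH_FIELDS)

def find_virtual_account(bill: dict, known_accounts: dict[tuple, str]) -> str | None:
    """Return the mapped virtual account ID, or None: the 3018 failure
    mode, where a new unmapped virtual account gets created instead."""
    return known_accounts.get(virtual_account_key(bill))
```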
Faisal <> Sunny
## Summary ### Team Updates and Resource Allocation Progress is being made with the development team, with engineers consistently completing tickets and showing improvements. Access to FDG has been granted to address long-standing bugs causing human errors in invoices. Priyanka's delayed availability was noted due to her commitment to another team and personal constraints (needing to leave by 5 PM local time for childcare). The plan is for her to focus on UBM reporting first, while other developers handle the Idea Exchange and Zone tasks. Cross-pollination during overlapping work hours is prioritized over extending her availability. ### Power BI Reporting Enhancements A consolidated Power BI report is identified as a critical goal ("holy grail") for unified visibility. Current reports track metrics like meters processed, which are useful for operations but irrelevant for leadership. Efforts are underway to: - **Reframe reporting around invoices instead of meters**: Providing clearer insights for stakeholders beyond the operations team. - **Grant new developers access to Power BI**: Enabling them to understand underlying data, improve existing reports, and eventually contribute to the consolidated dashboard. Access to the FDG dashboard has already been approved, so no additional sign-offs are needed. - **Address data visibility gaps**: Creating self-serve reports to reduce dependency on manual status updates (e.g., explaining error codes, indicating responsible teams for resolution). ### Invoice Processing and Scalability Significant progress has been made in managing invoice backlogs through manual queue optimization: - **Queue Structure**: Three queues are now managed: High-priority (Prepay/UBM live bills), Priority (specific client requests like Victra), and Backlog (enrollment, project makeup, history). Names may be refined. - **Backlog Reduction**: Efforts reduced the queue from over 1,000 to around 300-500 invoices, even while fixing a reprocessing bug affecting foreign currency bills. - **Processing Monitoring**: Current processing times are within the 24-hour target window (e.g., 16-17 hours observed). Processing frequency is adjusted (e.g., updates every 10 minutes) and capacity is increased on weekends. - **Scalability Analysis**: The immediate solution involves increasing the Azure OpenAI quota via Microsoft. The next tier (dedicated team CPU) is deemed potentially overkill and cost-ineffective for current volumes. Understanding SLAs and thresholds for escalation remains a priority. Manual management is sufficient for now while systematic solutions are explored. ### OpenAI Access and Invoice Processing Capability Access to the OpenAI node for developers is pending due to prerequisite training/LMS access requirements. Requests are in progress, with access expected soon. This is part of the broader "invoice processing scalability" effort to understand current capabilities, options, and future needs before engaging Microsoft. ### Operational Visibility and Workflow Challenges Persistent challenges exist in tracking invoice statuses across systems (DSS, UBM, PayClearly), leading to inefficiencies and repeated status inquiries. Proposed solutions include: - **Adding key status columns**: Integrating UBM and PayClearly statuses into tracking reports for immediate clarity. - **Developing self-serve tools**: Empowering users to find statuses and next steps independently, reducing the burden on the core team for routine inquiries.
- **Untangling complex workflows**: Time is needed to systematically resolve underlying issues causing delays and confusion, but operational demands often interrupt this work. ### Emergency Payment Processing A request emerged to split AP files sent to PayClearly into emergency and non-emergency payments: - **Current State**: AP files are currently undifferentiated. - **Plan**: Requirements are being finalized. The non-emergency split is targeted for completion within a week, while the emergency split is aimed for November but faces uncertainty due to requirement finalization and workload. Expectations are being managed cautiously. - **Ownership**: Clarification is needed on how emergency payments are identified/tagged (e.g., adding a database flag in UBM) to determine implementation effort and avoid overloading the team. ### December Planning and Prioritization Preliminary discussions about December goals highlighted concerns about the volume of requested work being unrealistic. A review of the prioritized spreadsheet is planned for an upcoming meeting to align on feasible deliverables.
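For the AP file split, a hedged sketch is shown below; it assumes the proposed UBM database flag (here `is_emergency`) and a CSV-style AP file, both of which are still to be finalized.

```python
import csv

# Sketch of the requested split, assuming the proposed UBM flag
# `is_emergency`; the real file format and flag name are still open.
def split_ap_files(payments: list[dict], out_prefix: str) -> None:
    groups = {"emergency": [], "standard": []}
    for payment in payments:
        key = "emergency" if payment.get("is_emergency") else "standard"
        groups[key].append(payment)
    for label, rows in groups.items():
        if not rows:
            continue
        with open(f"{out_prefix}_{label}.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)
```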
Faisal <> Sunny
## Summary ### November Priorities The team is focusing on several key deliverables for November. Invoice processing enhancements are underway, with technical changes being led by Faisal. A baseline report is in testing phases, though completion is delayed due to scheduling conflicts with Sunny. HubSpot automation for ticket creation is nearly complete, pending final review with Faisal and Sunny later in the week. - **Invoice Processing**: Technical upgrades to improve capabilities, specifically assigned to Faisal rather than Terra. - **Baseline Report**: Testing delayed; collaboration with Sunny required to finalize and share with the team. - **HubSpot Automation**: Final stages of development; cross-functional review meeting scheduled to wrap up implementation. ### December Workload Management December deliverables are being scrutinized for feasibility, with some items deferred to January. The team aims to avoid overcommitment by prioritizing critical path items. - **MFA Downloads**: Deprioritized due to potential overlap with broader web download initiatives; may be eliminated entirely. - **Onboarding Pipeline**: Remains a priority for Mike, with no changes to timeline. - **Integrity Checks**: Short-term fixes are actively progressing through collaboration; scope adjusted to split phases 2 and 3, with phase 3 redefined for permanent solutions. - **Review Rules**: Likely deferred due to bandwidth constraints from integrity checks and onboarding workloads. - **Power BI Report Updates**: Minor enhancements (sorting/filtering) to existing reports confirmed as manageable within December. ### January Considerations Several initiatives are shifted to January to balance Q4 capacity. New report development and system changes require deeper analysis before execution. - **Invoice Download & New Power BI Report**: Postponed to allow better understanding of existing reports and alignment with other priorities. - **MFA Removal from Personal Phones**: Moved to January alongside related MFA initiatives. - **Data Validation Errors**: Cross-team effort requiring coordination; incremental progress expected but full resolution extends into January. - **Interpreter Layer**: Timing dependent on outcomes of active work; may accelerate if dependencies resolve. ### Timeline and Resource Adjustments Resource allocation and deadlines are actively refined based on capacity and dependencies. Key adjustments include: - **December Commitments**: Focus retained on achievable items like system status reports and onboarding pipelines, while complex items (e.g., data services changes) are deferred. - **Data Validation Approach**: Incremental fixes prioritized over comprehensive solutions due to cross-team coordination needs. - **Reporting Cadence**: System status report development remains on track for November/December. ### Next Steps and Coordination A follow-up meeting is needed to finalize December deliverables. Scheduling conflicts (travel, onsite meetings) complicate coordination, prompting asynchronous updates. - **Sunny Collaboration**: Faisal to meet with Sunny imminently to align on report testing and interpreter layer timing. - **Leadership Alignment**: Plans to compile completed/pending work into a shareable format for management visibility. - **Meeting Logistics**: Thursday discussion between Faisal and the lead to consolidate updates before broader team review next week.
OpenAI Monitoring
## Power BI License Management Discussions centered on acquiring additional Power BI licenses and optimizing existing subscriptions. Two more licenses are needed for new engineers to access data tables and dashboards. The purchasing process requires navigating Microsoft's admin center, with potential challenges in identifying the correct subscription options. Existing unused licenses (like Fabric and Power Apps) were flagged for cancellation to reduce costs. A review of current subscriptions revealed discrepancies between purchased and active licenses, prompting further investigation into billing structures. ## Data Acquisition Queue Challenges A critical issue emerged regarding foreign currency bills causing processing loops in the data acquisition system. Currently, bills identified as foreign currency are moved to a separate queue but lack a persistent flag to prevent reprocessing. This results in the same bills being cycled repeatedly, creating backlog inefficiencies. As an immediate solution, foreign bills will be rerouted to the data audit queue to break the loop. Long-term solutions proposed include adding an "is_foreign" flag to the bill table or implementing Cosmos DB checks to skip processed bills. ## Azure OpenAI Processing Backlog Approximately 1,500 invoices are delayed in Azure OpenAI due to token limitations and insufficient monitoring. The system operates within a 4,000-token request window, but current tracking lacks real-time visibility into usage against the quota. This opacity complicates backlog management and risks overwhelming the system. Proposed next steps include: - **Building a monitoring dashboard**: Integrating token metrics into Application Insights or Grafana by adding metadata fields to batch records. - **Developing a load balancer**: After monitoring is implemented, optimizing request distribution to prevent bottlenecks. Urgency was emphasized due to the growing queue and potential for systemic failure if limits are exceeded. ## Telerik License Acquisition New Telerik licenses are required for onboarding engineers, but the process remains unclear. Past licenses were managed via the Telerik website using company credentials, though current access details are unavailable. Chris Busby was identified as the primary contact for license procurement, given his historical involvement in external vendor management. ## System Updates and Integrations One update involved synchronizing UBM (Universal Billing Management) with external systems. Credentials from BS (Billing System) accounts periodically require manual updates in UBM, necessitating coordination between teams to maintain system alignment. The complexity of this process highlighted potential areas for automation or streamlined workflows. ## Infrastructure Cost Review A preliminary audit of Microsoft subscriptions revealed potential inefficiencies. Services like Microsoft Entra ID P2 and Entra ID Governance showed unclear utilization patterns, prompting plans to: - **Identify redundant services**: Scrutinize underused products (e.g., Fabric) for cancellation. - **Clarify billing allocations**: Resolve discrepancies between purchased licenses and active users, particularly for Power BI and Microsoft 365. Cost management will be prioritized to align subscriptions with actual operational needs.
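The proposed fix for the foreign-currency loop could look like the sketch below, which persists a flag the first time a bill is identified as foreign so it is never re-queued; the field and function names are hypothetical stand-ins for the bill table and the Cosmos DB check.

```python
def detect_foreign_currency(bill: dict) -> bool:
    # simplistic stand-in for the real currency detection
    return bill.get("currency", "USD") != "USD"

def route_bill(bill: dict) -> str:
    if bill.get("is_foreign"):
        return "skip"                 # already identified; never reprocess
    if detect_foreign_currency(bill):
        bill["is_foreign"] = True     # persist the flag so the loop cannot recur
        return "data_audit_queue"     # the immediate rerouting decision
    return "data_acquisition_queue"
```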
DSS/UBM Errors
## Review of Short-Term Fixes and Location Mapping Process The team reviewed the ongoing list of short-term operational fixes, focusing on the manual location mapping process currently handled by Tim. This process involves matching addresses string-by-string with significant manual verification, which is time-consuming and prone to errors like mismatched buildings (e.g., Sheets and Royal Farms locations). Challenges include: - **Address matching complexity**: When addresses have minor discrepancies (e.g., "S" vs. "South"), manual eyeballing is required, slowing down bulk operations. - **Downstream impact**: Incorrect mappings affect linking to AP files and data pairing, compromising analytics accuracy. - **Exclusion protocols**: Certain entities (e.g., Sheets, Royal Farms) are excluded from automated mapping due to recurring mismatches and client complaints. ## Automation Strategies for Location Mapping To address manual bottlenecks, the team explored automation solutions: - **AI-driven fuzzy matching**: A spike is underway to test AI-based matching that calculates probability scores (0-1) for address similarities, reducing reliance on manual checks. - **Leveraging existing tools**: The Carbon Accounting team's internal API was proposed for adoption; it returns match probabilities (e.g., 0.95 for "South" vs. "S") and could be integrated into the platform. - **Interim bulk-upload functionality**: An Excel macro was shared to pre-process unmatched addresses, allowing threshold-based filtering (e.g., matches >0.95) before platform uploads. ## Handling 2024 Past-Due Issues Operational adjustments were discussed for managing past-due customer accounts: - **Bulk upload capability**: Instead of full automation, a controlled upload feature will be developed for operators to review and resolve past dues, accommodating client-specific rules (e.g., partial vs. full payments). - **Root-cause focus**: Emphasis on reducing backlog volume by improving processing speed rather than automating removals, as unresolved past dues stem from systemic delays. ## Unit of Measure (UOM) Errors A spike in UOM errors (e.g., MCF for electricity) was analyzed, particularly for mock bills from newer clients like Dakota: - **Data source issues**: Invalid UOMs (e.g., MCF for electricity) originated from client-provided files (e.g., URA), requiring validation against commodity types. - **Resolution protocol**: Errors will be escalated to the data source owner (e.g., Ali’s team) to correct files and remap commodities, avoiding platform overrides that could distort analytics. ## Operational Workflow Adjustments Minor process optimizations were proposed: - **Total charge mismatch checks**: Frequency reduced from daily to 2-3 times weekly, as manual adjustments (e.g., creating corrective lines) suffice for current volumes (~100 mismatches). - **Error prioritization**: Focus remains on high-impact validations (e.g., linking, pairing) while deprioritizing low-risk mock bill discrepancies.
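The resolution protocol for UOM errors implies a commodity-aware validation step, sketched below with a hypothetical allow-list; the goal is to reject impossible pairs such as MCF on an electricity bill before they reach analytics.

```python
# Hypothetical allow-list of units per commodity; the real mapping would be
# maintained alongside the platform's commodity reference data.
VALID_UOMS = {
    "electricity": {"KWH", "KW", "MWH"},
    "natural_gas": {"MCF", "CCF", "THERM", "DTH"},
    "water": {"GAL", "CGAL", "CF", "CCF"},
}

def check_uom(commodity: str, uom: str) -> bool:
    """True if the unit of measure is plausible for the commodity."""
    return uom.upper() in VALID_UOMS.get(commodity.lower(), set())

# The error pattern seen on mock bills: MCF reported against electricity.
assert not check_uom("electricity", "MCF")
```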
November Planning Review
## Summary ### November Deliverables Planning The team focused on refining November's deliverables, initially three items, with additional cross-month tasks spanning November and December to maintain visibility. Key adjustments included incorporating two new Data Services priorities and reclassifying items to reflect ongoing work accurately. - **Cross-month task tracking**: To avoid underrepresenting active initiatives, items with December end dates but November progress will appear as cross-month, ensuring transparency on current efforts without overloading the December list. - **Data Services additions**: Two critical November items were added: doubling invoice processing capacity (currently 1800/day) requiring technical changes, and addressing emergency payment AP file splitting which had already started development. - **Progress updates**: Existing November items were reviewed for active status, with plans to add in-progress notes and reassess any December tasks that might be accelerated or deferred. ### Emergency Payment AP File Enhancements Phase 1 of emergency payment improvements involves splitting AP files and individualizing payments, with timeline adjustments due to technical complexity. - **AP file separation**: The first story (splitting emergency payments into dedicated AP files) is on track for November completion, with development actively underway and requirements finalized. - **Individual payment processing**: The second story (enabling per-payment files) may extend into December due to unresolved requirements and coordination needs with external teams; operational workarounds are being explored to mitigate delays. - **Priority alignment**: Both stories were confirmed as Phase 1 requirements after clarifying that initial plans underestimated the scope, with Phase 2 deferred to December pending further analysis. ### Power BI Reporting Initiatives Credential management and data alignment issues are impacting Power BI report development, prompting timeline adjustments and prioritization discussions. - **Data synchronization challenges**: Discrepancies between Power BI reports and source systems (e.g., PayClearly) require urgent review, as current data pulls fail to match operational views, delaying dashboard finalization. - **Credential consolidation**: Multiple credential sources (UBM, Data Services, Smartsheet) complicate account health reporting; establishing UBM as the single source of truth is essential but requires foundational changes deferred to next year. - **December capacity concerns**: With 10+ Power BI reports slated for December, feasibility is uncertain due to holiday constraints; mock-ups exist for some, but requirements must be prioritized to avoid overload. ### December Priorities and Capacity Management Upcoming deliverables were scrutinized to balance urgency with realistic bandwidth, emphasizing high-impact items and deferring lower-priority work. - **Top December commitment**: Emergency Payment Phase 2 (individual payment files) was prioritized above all other items due to customer impact, though operational workarounds (e.g., delayed bank polling) could reduce pressure if development slips. - **Lower-priority deferrals**: Non-urgent Power BI reports (e.g., holistic EDM account health view) may shift to next year, as credential source conflicts and data alignment issues require upstream fixes first.
- **Capacity safeguards**: A "line in the sand" approach will be used to cap December deliverables, with trade-off discussions planned to ensure only critical items proceed given holiday disruptions. ### Follow-Up and Alignment Strategy Coordination processes were refined to maintain momentum, including dedicated sessions for December prioritization and cross-team expectation setting. - **November commitment lock-in**: A follow-up meeting will confirm November deliverables, particularly Data Services’ invoice processing and emergency payment components, ensuring no scope creep. - **December triage protocol**: Technical and product leads will pre-assess December requests, ranking them by impact and feasibility before stakeholder negotiations; contentious items will require formal justification to displace existing priorities. - **Documentation updates**: Power BI dependencies will be explicitly tagged in tracking sheets, and cross-month items will retain visual indicators to highlight ongoing progress beyond monthly boundaries.
Output Service Overview
## Meeting Focus and Objectives
The primary goal was to refine the migration strategy for the output service from the legacy system (DDIS) to the new Data Services System (DSS), emphasizing validation logic, fail-safe mechanisms, and architectural improvements. Key objectives included understanding the existing output service workflow, identifying necessary refactoring for scalability, and differentiating between core validation logic and client-specific (UBM) rules to enable future adaptability.

## Current Output Service Functionality
The output service acts as the final step before pushing processed bills to UBM, handling validation, data compilation, and error management.
- **Validation checks**: Multiple layers ensure data integrity, including verification of client-vendor existence, invoice date ranges, and charge calculations. Failures disable auto-output flags and trigger email alerts for manual intervention.
- **Error handling**: Bills failing validation revert to a "waiting for system" status (Bill Status 3), requiring ops teams to resolve issues in designated workflows (e.g., account setup for bills exceeding six pages).
- **Edge-case workflows**: Foreign-currency bills for new accounts route to audit workflows, while duplicates are flagged via checksum comparisons or invoice metadata (date/amount) and redirected to deletion or data acquisition queues.

## Migration Plan to DSS
Transitioning the output service to DSS involves significant refactoring to address technical constraints and improve maintainability.
- **Architectural changes**: Breaking monolithic validation, compilation, and output steps into separate queues to avoid Azure timeout limitations and simplify future updates.
- **Data enrichment**: Ensuring DSS supplies all necessary data (e.g., client service items) previously pulled from DDIS databases, requiring alignment between LLM outputs and legacy validation requirements.
- **Fail-safe logic**: Implementing mechanisms to reroute bills to legacy systems when DSS processing fails, particularly for excluded bill types or UBM-specific edge cases.

## Effort Estimation and Approach
The migration was estimated at 2-4 weeks, prioritizing incremental delivery of functional components.
- **Phased implementation**: Initial focus on replicating core output service logic in DSS (1-2 weeks), followed by UBM-specific validations and data-gap resolution (additional 1-2 weeks).
- **Risk mitigation**: Using LLMs for code translation (C# to Python) cautiously, with emphasis on manual review to ensure accuracy and maintain separation of concerns.

## Duplicate Resolution Challenges
Current duplicate detection methods were scrutinized for reliability gaps.
- **Checksum limitations**: DSS's PDF-based checksum fails to detect duplicates when bills arrive in different formats (e.g., scanned vs. digital), allowing mismatched files to bypass detection.
- **Proposed enhancements**: Combining checksums with invoice metadata (dates, amounts) as a secondary safeguard (see the sketch at the end of this summary), though implementation was deferred pending further analysis.

## Future-Proofing and Long-Term Strategy
Discussions emphasized decoupling client-specific logic from core services to support potential multi-client scalability.
- **UBM logic isolation**: Identifying and annotating UBM-specific validations (e.g., usage-description checks) during refactoring to facilitate later extraction into a middleware layer.
- **Externalization potential**: Designing DSS to serve as a reusable data-ingestion engine, independent of UBM's evolving requirements, to enable internal or external service offerings.

## Next Steps and Collaboration
Immediate actions include creating JIRA tickets for the output service migration and prioritizing urgent fixes. Collaboration will focus on documenting validation rules and aligning duplicate-resolution approaches between legacy and DSS systems.
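The exact duplicate-detection logic wasn't specified in the meeting; below is a minimal sketch of the proposed two-layer check, assuming hypothetical record shapes (a bill exposing raw PDF bytes plus parsed invoice metadata):

```python
import hashlib
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Bill:
    pdf_bytes: bytes      # raw file content
    invoice_date: date    # parsed from the bill
    total_amount: float   # parsed from the bill
    account_id: str

def checksum(bill: Bill) -> str:
    # Layer 1: exact-file match; misses duplicates that arrive
    # in a different format (scanned vs. digital).
    return hashlib.sha256(bill.pdf_bytes).hexdigest()

def metadata_key(bill: Bill) -> tuple:
    # Layer 2: flags bills whose parsed metadata collides even
    # when the underlying files differ.
    return (bill.account_id, bill.invoice_date, round(bill.total_amount, 2))

def is_probable_duplicate(bill: Bill, seen_checksums: set, seen_metadata: set) -> bool:
    return checksum(bill) in seen_checksums or metadata_key(bill) in seen_metadata
```

The metadata layer only catches probable duplicates, which is why the meeting treated it as a secondary safeguard rather than a replacement for the checksum.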
Weekly DSS call
## Meeting Context and Attendance
The meeting began with minimal attendance, attributed to end-of-month billing activities and conflicting customer meetings. Participants noted that absent members were likely handling critical financial tasks, which aligns with recurring monthly cycles. This context framed discussions around operational efficiency and system adjustments.

## Data Acquisition Queue Management
Key focus was on optimizing invoice processing queues to handle backlog and improve throughput.

### Queue Structure and Current Status
- **Dual-queue system**: A main queue handles general processing, while a priority queue (currently for "Vikla/Victor" clients) is nearly cleared.
- **Backlog volume**: 3,701 invoices pending in the data acquisition queue, with processing batches set at 10-15 invoices per run.
- **Priority clearance**: The priority queue was down to 0-4 items, allowing reallocation of resources to the main queue.

### Processing Metrics and Thresholds
- **Ideal processing time**: 5-10 minutes per batch; exceeding 10 minutes risks queue buildup and timeouts.
- **Real-time monitoring**: Relies on IPS time-taken metrics (e.g., 300 seconds = optimal) rather than unreliable meter-based reports.

## Weekend Processing Strategy
Leveraging off-peak hours to accelerate backlog clearance.

### Tactical Adjustments
- **Increased batch size**: Planning to raise processing from 10 to 12-14 invoices per batch during low-traffic periods (nights/weekends); see the sketch at the end of this summary.
- **Traffic-based scaling**: Confirmed historical success with bulk processing during weekends/holidays due to reduced external API traffic (e.g., Microsoft/OpenAI).
- **Dynamic prioritization**: Proposed shifting to a logic-based system to auto-allocate slots between live bills, historical pulls, and priority clients.

## System Limitations and Reporting Gaps
Critical observations on tooling inefficiencies.

### Data Visibility Issues
- **Inconsistent reporting**: Discrepancy between meter counts (3,163) and actual invoices (3,701) highlighted unreliable dashboard metrics.
- **Manual workarounds**: Required direct log checks to verify queue status due to inadequate UI.

### Operational Risks
- **Queue overflow**: Processing delays beyond 10 minutes could cascade into backlog accumulation and system timeouts.

## Strategic Roadmap
Long-term solutions to enhance processing capacity.

### Output Service Overhaul
- **Legacy system bypass**: Exploring a streamlined pathway to skip redundant steps in data population, accelerating throughput.
- **Dynamic queue allocation**: Designing a system to auto-distribute batch slots based on client priority and invoice urgency.

### Reporting Enhancements
- **UI improvements**: Urgent need for intuitive dashboards showing real-time invoice counts (not meters) and processing health.
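The scheduling rule was not formalized in the meeting; below is a minimal sketch of the off-peak batch-size bump, assuming US business hours as the peak window (the hour thresholds are illustrative) and using the 10 and 12-14 figures above:

```python
from datetime import datetime

PEAK_BATCH_SIZE = 10       # default during business hours
OFF_PEAK_BATCH_SIZE = 14   # upper end of the proposed 12-14 range

def batch_size(now: datetime) -> int:
    # Nights (before 7 AM / after 7 PM) and weekends see less external
    # API traffic, so larger batches are safer in those windows.
    is_weekend = now.weekday() >= 5
    is_night = now.hour < 7 or now.hour >= 19
    return OFF_PEAK_BATCH_SIZE if (is_weekend or is_night) else PEAK_BATCH_SIZE

print(batch_size(datetime(2025, 11, 8, 23, 0)))  # Saturday night -> 14
```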
Constellation bills
## Process Automation for Constellation Bill Retrieval
The meeting focused on improving the automated retrieval of Constellation energy bills to eliminate manual PDF extraction from CEP (Constellation Energy Portal). An existing workflow designed to pull invoices into a drop folder exists but suffers from reliability issues and inconsistent coverage. Key challenges include:
- **Unreliable Existing Workflow**: The current Laserfiche-based process, initiated years ago, frequently fails or misses accounts. Its enrollment mechanism is poorly documented, with no clear ownership.
- **Scope Gaps**: It's unclear whether the workflow captures all Constellation accounts, particularly commodity accounts where the company acts as the supplier. EDI (Electronic Data Interchange) invoice inclusion is also uncertain.

## Identifying "Pair Accounts" for Automation
A critical step involves defining identifiers for commodity accounts ("pair accounts") to ensure targeted automation. Discussion centered on:
- **Deal Factor Analysis**: Proposing to use consistent deal factors (e.g., `contract rate`, `pair cost`, or `rate per block`) as markers. These metadata fields could flag accounts needing automated bill retrieval; see the sketch at the end of this summary.
- **Data Verification Challenges**: Historical inaccuracies in customer numbers and deal factors complicate identification. Cross-referencing deal IDs or LDC accounts was suggested but noted as error-prone.

## Next Steps and Ownership
Immediate actions were delegated to resolve workflow gaps:
- **Technical Investigation**: Matt and JP will analyze commodity billing files to pinpoint a common deal factor across all relevant accounts.
- **Stakeholder Follow-Up**: A separate meeting with the Navigator development team (including Maurice) is needed to address technical implementation, especially regarding EDI invoice inclusion.

## Outcome Goals
The initiative aims to create a scalable, error-proof process that:
- Automatically enrolls new Constellation accounts upon creation.
- Captures all bill types (PDF and EDI) without manual intervention.
- Uses unambiguous identifiers to avoid missed accounts.
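The field names below are hypothetical; this is a minimal sketch of flagging pair accounts by the presence of a known deal factor, assuming account metadata is available as a list of dicts:

```python
# Candidate deal factors discussed as possible pair-account markers.
PAIR_ACCOUNT_FACTORS = {"contract rate", "pair cost", "rate per block"}

def flag_pair_accounts(accounts: list[dict]) -> list[dict]:
    # An account is a candidate for automated retrieval when any of
    # its deal factors matches a known commodity marker.
    return [
        a for a in accounts
        if PAIR_ACCOUNT_FACTORS & {f.lower() for f in a.get("deal_factors", [])}
    ]

accounts = [
    {"account_id": "C-1001", "deal_factors": ["Contract Rate"]},
    {"account_id": "C-1002", "deal_factors": []},
]
print([a["account_id"] for a in flag_pair_accounts(accounts)])  # ['C-1001']
```

Whether a single factor is truly common to all pair accounts is exactly what the Matt/JP investigation is meant to confirm.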
DSS Planning
## System Capacity and Constraints
The meeting focused on understanding and addressing current system limitations related to token processing quotas. Key constraints include a **1.5 million tokens per minute rate limit** and a **1 billion in-queue token capacity**, with the per-minute limit identified as the primary bottleneck. Current usage averages only 8% of total capacity, but sporadic spikes (e.g., 500K-token invoices) risk exceeding thresholds.
- **Throttling experiments**: Proposed gradually increasing batch sizes (e.g., from 10 to 15 invoices per batch) instead of adjusting frequency to minimize system "chattiness" and avoid rate-limit breaches.
- **Queue prioritization**: Existing queues (live bills vs. priority) will be optimized using a "progressive logic" approach: high-priority queues exhaust allocated capacity first before lower-priority queues activate, preventing resource contention (see the sketch at the end of this summary).

## Failure Handling Mechanisms
Discussed the system's existing fail-safes for rate-limit breaches. When token limits are exceeded, requests fail immediately but remain queued in DSS (Data Services System), triggering email alerts for manual reprocessing.
- **Automatic retry risks**: Avoided due to potential "double-dipping" (retries consuming additional capacity during peak loads), which could worsen congestion.
- **Fallback strategy**: Manual intervention remains preferred unless failure rates spike; gradual capacity scaling minimizes failures.

## Deployment and Quota Optimization
Reviewed redundant Azure deployments ("O3 batch" vs. "O3 batch 2"), with the latter unused and holding half the quota (500M tokens) of the primary deployment.
- **Quota consolidation**: Deleting "O3 batch 2" and reallocating its capacity to the main deployment simplifies management and maximizes usable tokens under one subscription.
- **Subscription-level limits**: Emphasized that splitting quotas under the same subscription doesn't increase overall capacity; only separate subscriptions (e.g., prod vs. non-prod) enable true scaling.

## Output Service Migration
Plans to migrate output generation from the legacy system to DSS to reduce bottlenecks. The legacy service stalls under load and lacks resilience, causing operational delays.
- **Direct-to-UBM pipeline**: DSS will generate CSV/ZIP outputs directly for processed invoices, bypassing legacy dependencies. Legacy fallbacks will handle exceptions.
- **Data decoupling**: Critical fields for output (e.g., billing metadata) must reside in DSS's Cosmos DB, eliminating template pulls from the legacy database to ensure self-contained processing.

## Long-Term System Scalability
Explored strategies to minimize legacy-system reliance, particularly for newer meters (post-June 2025).
- **Greenfield processing**: Invoices from newer meters should flow end-to-end in DSS without legacy interactions, using Cosmos DB as the single source for output data.
- **Three-phase roadmap**:
  1. Replicate the output service in DSS.
  2. Reduce legacy load by only pushing data during failures.
  3. Fully migrate all invoice processing to DSS.
- **Architectural shift**: Outputs must derive solely from build data (not templates), enabling stateless processing and eliminating legacy database bottlenecks.
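The "progressive logic" was not spelled out in detail; below is a minimal sketch of the idea, assuming a fixed per-minute token budget that higher-priority queues draw down first (queue names and their order are illustrative, and `estimate_tokens` is a hypothetical callback):

```python
from collections import deque

TOKEN_BUDGET_PER_MIN = 1_500_000  # the rate limit from the meeting

def next_batch(queues: dict[str, deque], estimate_tokens, batch_size: int = 10):
    """Drain queues in priority order; a lower-priority queue only
    activates once every higher-priority queue is exhausted or its
    next invoice no longer fits in the remaining budget."""
    budget = TOKEN_BUDGET_PER_MIN
    batch = []
    for name in ("live_bills", "priority_clients", "historical"):  # high -> low
        q = queues.get(name, deque())
        while q and len(batch) < batch_size:
            cost = estimate_tokens(q[0])
            if cost > budget:
                break  # leave oversized spikes (e.g., 500K-token invoices) for the next window
            batch.append(q.popleft())
            budget -= cost
    return batch
```

Budgeting per window rather than retrying on failure matches the meeting's preference for avoiding "double-dipping" retries during peak loads.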
Missing Bills Report
## Report Requirements and Initial Misunderstanding
The meeting began by clarifying a misunderstanding about report logic. The initial report focused solely on **processed invoices**, but the actual need was to analyze **bills still in the queue** (unprocessed) to prioritize actions and prevent service disconnects. This distinction is critical because processed invoices are already resolved, while queued bills require urgent attention to meet due dates.

### Current Backlog Status and Progress
Significant progress was made in reducing the backlog:
- **Prepay queue reduction**: From over 9,300 meters to ~5,300 meters in one day, with Aspen's data acquisition queue fully cleared.
- **Current status**: 6,096 meters remain in the prepay queue, including overdue items and those due imminently.

Despite improvements, the team acknowledged ongoing challenges with timeliness, particularly for vendors with short payment windows.

### Due Date Challenges and System Constraints
Due date management faces complexities due to processing delays and vendor variability:
- **Vendor payment terms**: Range from 7 days to 3 months, complicating prioritization when bills are downloaded late.
- **Impact of delays**: Bills downloaded a week late risk missing short payment windows (e.g., 7-day terms), increasing disconnect risks.
- **Cycle day validation**: Used in data capture to flag anomalies (e.g., a 180-day cycle vs. the typical 30 days), but this data isn't fully leveraged for due date forecasting.

### Report Enhancement Proposal
To address visibility gaps, the team proposed modifying the existing report:
- **Filter for incomplete bills**: Exclude processed invoices to focus solely on queued items, using workflow/bill status fields (e.g., "in progress").
- **Data usability**: Enable copy-paste functionality for account IDs to streamline manual processing if needed.
- **Field additions**: Include payment days and cycle days to estimate due dates (see the sketch at the end of this summary), though incomplete header data from OCR/data acquisition limits accuracy.

### Strategic Trade-offs and Communication
A key discussion centered on balancing effort versus impact:
- **Feasibility concerns**: Building a comprehensive "in-queue" report requires significant model adjustments, potentially diverting resources from backlog reduction.
- **Stakeholder alignment**: Plans to clarify with leadership whether to proceed with report enhancements or focus entirely on clearing the backlog, given the high operational lift.
- **Interim solution**: Using historical averages (e.g., median payment timelines) to estimate due dates for queued bills as a stopgap.

### System Access and Follow-up
Abhinav's access to monitoring tools (FDG) was confirmed, though he primarily relies on automated Power BI reports. The team committed to:
- Implementing report filters for incomplete bills.
- Sharing updates via internal channels once modifications are live.
- Reassessing the need for advanced due date tracking based on leadership feedback.
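The field names below are hypothetical; this is a minimal sketch of estimating a queued bill's due date from its cycle days and the vendor's payment terms, falling back to a historical median when the header data is missing, per the stopgap discussed above:

```python
from datetime import date, timedelta
from statistics import median

def estimate_due_date(last_bill_date: date,
                      cycle_days: int | None,
                      payment_days: int | None,
                      historical_payment_days: list[int]) -> date:
    # Next statement date = previous bill date + billing cycle.
    cycle = cycle_days if cycle_days else 30  # assume a monthly cycle when unknown
    statement_date = last_bill_date + timedelta(days=cycle)
    # Vendor terms range from 7 days to 3 months; use the historical
    # median as a stopgap when the header didn't capture them.
    terms = payment_days if payment_days else int(median(historical_payment_days))
    return statement_date + timedelta(days=terms)

print(estimate_due_date(date(2025, 10, 1), 30, None, [7, 14, 30]))  # 2025-11-14
```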
UBM Project Group
## October Deliverables Review
The team assessed progress on October initiatives, completing two items while moving others to November due to testing requirements.
- **Completed items**:
  - **Validation error reporting**: Implemented functionality to filter data by specific error messages, resolving past discrepancies.
  - **Integrity check documentation**: Finalized error documentation framework, though ongoing monitoring for new errors remains necessary.
- **Moved to November**:
  - **Customer onboarding pipeline**: Requires additional testing and resource alignment before deployment.
  - **Invoice processing report**: Development complete but encountering refresh instability during testing; troubleshooting underway.

---

## November Initiative Prioritization
Key deliverables were evaluated for feasibility, with several items deferred to December due to resource constraints and emerging priorities.
- **Deferred to December**:
  - **Onboarding pipeline phase two**: Dependent on developer capacity currently allocated to higher-priority system enhancements.
  - **Invoice download integrity**: Competing with urgent support tasks requiring immediate attention.
  - **Rules review**: Timeline impacted by developer availability during holiday periods and competing UBM workloads.
- **Retained in November**:
  - **Integrity check fixes**: Documentation updates proceeding pending confirmation of developer bandwidth for implementation.

---

## Emergency Payments Reconciliation
A critical solution was proposed to streamline emergency payment processing through dedicated workflows.
- **Core challenges**: Manual reconciliation causes delays when standard AP batches mix emergency payments with routine transactions.
- **Proposed solution**:
  - **Dedicated AP files**: Isolating emergency payments into separate batches to simplify tracking.
  - **Funds-request timing**: Synchronizing payment triggers with PayClearly's API to prevent transaction fee mismatches.
- **Implementation plan**: Development scoped for November-December pending technical assessment, with potential task splitting based on complexity.

---

## Power BI Reporting Framework
A standardized process was established to accelerate dashboard development for payment analytics.
- **Requirement clarification**: Reports will advance faster with pre-defined templates specifying:
  - Exact data fields and slicer parameters needed.
  - Source system mapping (e.g., Studian vs. third-party tables).
- **Centralized collaboration**: A dedicated SharePoint folder will house all report specifications, enabling real-time requirement validation before development.
- **Current priorities**:
  - **Electronic payment conversion**: Aligning PayClearly API data pulls with direct payment system extracts to resolve reporting discrepancies.
  - **Status alignment**: Resolving inconsistencies between API status labels and operational payment states.

---

## Q1 Planning and Visualization
The roadmap for upcoming quarters was structured to optimize team alignment and visibility.
- **Timeline adjustments**: December deliverables will be reassessed next week based on developer capacity and holiday schedules.
- **Visual planning format**: Two views were proposed for tracking cross-team dependencies:
  - **Integrated timeline**: Consolidated deliverables across all workstreams.
  - **Support-focused view**: Highlights how technical resources enable specific team objectives.
- **Next steps**: Refine Power BI requirements and emergency payment technical specs before the next prioritization review.
Capacity and Tracker Alignment
## Summary

### DSS System Improvements and Workflow Challenges
The meeting focused on addressing critical bottlenecks in the DSS (Data Services System) and legacy DDIS workflows. Key priorities included:
- **Creating a unified Power BI view**: A dedicated view is being developed to track invoice due dates and processing statuses, aiming to replace fragmented reports like the "Meters report." This will provide real-time visibility into invoice backlogs and help prioritize urgent workloads (e.g., 50 out of 100 invoices due immediately).
- **Resolving workflow complexity**: The DSS/DDIS integration suffers from convoluted status codes (e.g., *workflow ID*, *bill ID*) and non-sequential processes. For example, legacy system filters often fail, requiring manual intervention to resolve issues like misaligned client-vendor mappings or audit data errors.
- **Collaborating with cross-functional teams**: Discussions highlighted the need to reduce dependency on legacy systems by aligning DSS improvements with UBM (Unified Billing Module) and Arcadia pipelines.

---

### Capacity and Workload Management
Operational capacity emerged as a critical constraint, with three overlapping priorities:
1. **Invoice processing bottlenecks**: Daily firefighting consumes ~50% of resources, delaying strategic fixes. Examples include recurring validation errors, misrouted bills, and manual account setup overrides.
2. **Urgent requests**: High-priority tasks from stakeholders like Abhinav often disrupt workflows. A proposed solution involves creating a *centralized tracker* to filter and assign requests transparently, reducing ad-hoc interruptions.
3. **Resource allocation**: Teams are stretched thin between DSS fixes, capacity scaling, and urgent tasks. For instance, resolving a single invoice error might take 4 hours of research but only 8 seconds to fix, highlighting inefficiencies.

---

### Process Documentation and Reporting
- **Mapping invoice lifecycles**: A detailed workflow diagram is underway to document bill processing steps from acquisition to output. This aims to clarify "stuck" states (e.g., audit data delays) and reduce redundant cross-team inquiries.
- **Dashboard development**: A Power BI semantic model is being designed to categorize issues by team (e.g., DSS vs. Arcadia) and severity. This will replace ambiguous terms like "DSS issue" with actionable insights, such as distinguishing between validation errors and late bill downloads.
- **Legacy system knowledge gaps**: Critical processes lack documentation, requiring time-consuming reverse-engineering. For example, understanding DDIS "ready to send" logic took 2+ hours of live troubleshooting due to unclear status codes.

---

### Legacy System Challenges and Technical Debt
- **DDIS workflow inefficiencies**: The legacy system's unstructured status codes (e.g., *acquired data*, *audit account setup*) create redundant loops. For instance, reassigning a bill to a correct client ID often triggers unnecessary reprocessing.
- **Risk of scaling on flawed systems**: Increasing invoice throughput without fixing DSS/DDIS issues could amplify errors. A 2,000-invoice backlog might require a simple bulk update, but current workflows force per-invoice fixes.
- **Dependency on manual interventions**: Teams rely on screenshots and Slack threads to resolve issues like misconfigured client-vendor mappings, which could be automated with CRM-driven validations.

---

### Long-Term Strategic Goals
- **Reducing legacy system reliance**: A parallel processing pipeline is proposed to bypass DDIS for routine tasks (e.g., client ID assignments), minimizing "fail-safe" dependencies.
- **Arcadia integration**: Normalizing bills through Arcadia's parsing systems could automate pre-output validations, reducing manual reviews.
- **Reorganizing team interactions**: Moving toward weekly stakeholder updates (vs. daily) and assigning non-critical tasks to dedicated teams (e.g., Mary's Ops group) to free up R&D bandwidth.

The meeting underscored the need to balance immediate fixes with systemic overhauls, emphasizing that *"living day-to-day prevents investing in tomorrow."* Without clearer workflows and automation, teams risk perpetual firefighting cycles.
Batch Scaling Plan
## OpenAI Token Quota Analysis
The meeting focused on analyzing OpenAI's token processing quotas and current utilization rates to identify capacity expansion opportunities. Key findings revealed significant underutilization of allocated resources, with current usage at only 8% of the 1.5 million tokens-per-minute (TPM) quota. This indicates substantial headroom for scaling invoice processing volumes without immediate quota constraints.

### Quota Structure and Limitations
- **Rolling window quota**: The 1.5 million TPM limit operates on a per-minute basis, rejecting requests exceeding this threshold within any 60-second window.
- **In-queue token limit**: A separate 1 billion token ceiling restricts pending/unprocessed requests, though current processing patterns show no risk of breaching this.
- **Error handling gap**: No retry mechanism exists for failed requests (HTTP 429 errors), risking permanent loss of unprocessed invoices during quota spikes.

### Current Processing Patterns
- **Token consumption**: Last month's total reached 1.5 billion tokens (~78,000 requests), averaging 20k tokens per invoice.
- **Peak observations**:
  - Largest single request: ~500k tokens (attributed to complex invoices).
  - Typical throughput: 15-30 invoices processed hourly with minimal clustering.
- **Deployment imbalance**: Primary model (`O3-batch`) handles all traffic, while the identical secondary deployment (`O3-batch-2`) sits idle at 0% utilization despite having half the in-queue capacity (500M tokens).

### Capacity Expansion Strategies
- **Immediate scaling via load distribution**:
  - Implementing round-robin routing between `O3-batch` and `O3-batch-2` could instantly increase throughput by 50% (from ~3,000 to ~4,500 invoices daily).
  - Weighted distribution (e.g., a 2:1 ratio favoring the higher-capacity deployment) was proposed as an alternative optimization; see the sketch at the end of this summary.
- **Retry mechanism urgency**: Any frequency increase requires automated requeuing logic to handle rejected requests during TPM spikes, preventing invoice loss.
- **Quota expansion feasibility**: Doubling total capacity is technically viable via Azure quota requests, but deemed unnecessary given current underutilization.

### Monitoring and Safeguards
- **Proactive alerting**:
  - Slack/webhook notifications will trigger near quota thresholds (70-80% utilization) for TPM and in-queue limits.
  - Focus on detecting HTTP 429 errors to identify processing bottlenecks.
- **Metric accessibility**: Dashboards displaying token consumption patterns and deployment-specific utilization will be shared for real-time monitoring.

### Risk Mitigation Considerations
- **Spike management**: Large invoices (e.g., 500k tokens) could exhaust TPM quotas if clustered, necessitating:
  - Request staggering algorithms.
  - Dynamic batch sizing based on token estimates.
- **Deployment redundancy**: The idle `O3-batch-2` serves as failover during Azure outages, though model-level disruptions would affect both deployments.
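The deployment names come from the meeting; everything else (client wiring, backoff constants) is illustrative. A minimal sketch of weighted routing combined with retry-on-429, under the 2:1 weighting proposed above:

```python
import itertools
import random
import time

class RateLimitError(Exception):
    """Stand-in for the client's HTTP 429 exception."""

# 2:1 weighting toward the primary deployment, per the proposal.
DEPLOYMENTS = itertools.cycle(["O3-batch", "O3-batch", "O3-batch-2"])

def submit_with_retry(payload: dict, send, max_attempts: int = 5):
    """`send(deployment, payload)` stands in for the real API call and
    should raise RateLimitError on HTTP 429."""
    for attempt in range(max_attempts):
        deployment = next(DEPLOYMENTS)
        try:
            return send(deployment, payload)
        except RateLimitError:
            # Exponential backoff with jitter instead of dropping the
            # invoice, so TPM spikes don't cause permanent loss.
            time.sleep((2 ** attempt) + random.random())
    raise RuntimeError("still rate-limited after retries; requeue the invoice")
```

Rotating the deployment on each retry means a request rejected by one deployment gets its next attempt against the other, which is the point of keeping `O3-batch-2` in the pool.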
Report
## System Status Report Analysis
The meeting focused on understanding the data sources and structure of the "System Status" report to address fragmented invoice tracking. Key points:
- **Data Source Investigation**: The report pulls from Power BI, but its underlying queries originate from SSIS (SQL Server Integration Services) and SSRS (SQL Server Reporting Services). Clarification is needed on whether the queries reside in Power BI or SSRS.
- **Fragmentation Challenge**: Currently, invoice tracking requires checking 5-6 disparate systems (e.g., DSS, UBM, spreadsheets), causing delays and inconsistencies. For example:
  - Bill IDs differ across systems (UBM uses unique metrics; PayClearly lacks tracking).
  - Critical delays occur when invoices stall in Data Services for 3-4 days, impacting bill payments for Utility UBM customers.
- **Long-Term Solution**: Develop a unified report in AppSmith to visualize invoice stages end-to-end, replacing siloed trackers.

---

## Power BI Licensing Issue
A trial license alert ("Premium Per User expires in 6 days") appeared despite an active Pro license assignment. Actions include:
- **License Verification**: Confirmed the Pro license is assigned, but the system erroneously displays a trial warning.
- **Resolution Path**: Admin permissions will be used to revalidate license allocation and suppress the alert.

---

## Azure AI Quota Utilization
The team reviewed Azure AI quotas to assess capacity for scaling invoice processing:
- **Current Usage**:
  - O3 Batch (production) uses 1.46 million tokens against a 1.5 million token limit.
  - A separate O3 Batch instance (likely for dev/testing) shows minimal usage.
- **Quota Ambiguity**: Uncertainty persists on whether limits are monthly or per-minute. Microsoft documentation will be re-examined, with potential vendor engagement for clarification.
- **Critical Need**: Determine whether near-capacity usage risks blocking invoice processing scalability.

---

## Audit Strategy Alignment (SOC 1 & SOC 2)
Discussed consolidating auditors for SOC reports across platforms (Carbon Accounting, UBM, Glimpse) to improve efficiency:
- **Current Landscape**:
  - Carbon Accounting is undergoing SOC 2 Type 2 with Sensible.
  - UBM has SOC 1 Type 2 with another auditor but may require SOC 2 for future RFPs.
- **Consolidation Benefits**:
  - Leverage entity-wide controls (e.g., policies, infrastructure, personnel testing) across platforms.
  - Align reporting periods to minimize redundant evidence collection.
- **Next Steps**:
  - Share UBM's prior SOC 1 report with Sensible to scope the workload and provide a unified quote for SOC 1 + SOC 2.
  - Address UBM-specific financial controls (e.g., bill pay workflows) via live walkthroughs.
DSS Capacity Plan
## Current Processing Capacity and Bottlenecks
The system was initially designed to handle 2,400 invoices daily but now processes approximately 3,000-3,100 invoices, leading to persistent backlogs (e.g., 10,000 in data acquisition). This prevents meeting SLAs for both bill-paying customers and historical data. Key constraints include Microsoft's throttling thresholds (exceeding them triggers multi-day slowdowns) and unoptimized batch processing that risks overwhelming capacity during peak web-upload activity.

### Capacity Management Concerns
- **Threshold risks**: Increasing batch frequency (e.g., from 5 to 3 minutes) could breach Microsoft's limits, causing system-wide slowdowns lasting days due to cascading timeouts and reprocessing needs.
- **Web-upload impact**: Direct uploads (e.g., via FDG Connect) bypass queues and consume reserved capacity, necessitating a buffer to avoid tripping thresholds during high-volume periods.

---

## Optimization Strategies for Increased Throughput
Two primary solutions were evaluated to address capacity limitations: reserved Microsoft capacity and leveraging underutilized non-production resources.

### Reserved Capacity Analysis
- **Cost-benefit trade-offs**: Transitioning to reserved PTUs (provisioned throughput, i.e., dedicated compute) requires a monthly commitment, but current volumes (~3,000/day) fall below the breakeven point (~12,000/day), making it economically unviable for now.
- **Performance metrics**: Token utilization in Azure must be analyzed to determine safe scaling margins (e.g., operating at 70% capacity allows incremental frequency adjustments).

### Non-Production Resource Utilization
- **Dev/test environment exploitation**: Non-production subscriptions offer identical, underused capacity. Redirecting a portion of processing (e.g., historical invoices) to these environments could increase daily throughput by ~50% (from 3,000 to 4,500 invoices).
- **Implementation complexity**: Requires adding a discriminator to route invoices and modifying tracking logic to fetch results from the correct environment, estimated as a moderate technical lift.

---

## Technical Improvements and System Refactoring
Optimizing requests and addressing processing failures emerged as critical priorities.

### Batch Processing Efficiency
- **Microsoft's feedback**: Sending single-row files increases overhead; batching multiple files per request could improve performance but disrupts the current one-to-one error-tracking framework.
- **Feasibility assessment**: Refactoring for batched inputs is deprioritized due to high effort and marginal gains, as latency would shift from processing to queue wait times.

### Error Handling and Retry Mechanisms
- **Transient failures**: SQL timeouts (Error #3) affect ~342 invoices; implementing exponential-backoff retries at the database-transaction level would automate recovery without manual intervention (a sketch appears at the end of this summary).
- **Operational gaps**: 25,000 unaddressed failure emails highlight workflow breakdowns; fixes include linking DSS statuses to legacy system completion flags to filter resolved cases.

---

## System Integration and Workflow Disconnects
Misalignments between DSS, DDIS, and UBM cause invoices to stall, with 15,000+ items stuck in "waiting for operator" status due to filtering inaccuracies and missing client/vendor data.

### Data Consistency Challenges
- **Upstream data gaps**: BDE files lack client/vendor codes, causing pre-audit failures. Legacy system inconsistencies (e.g., duplicate client entries) further prevent automated matching.
- **Operational process fixes**: Manual intervention is required for unresolved failures, but email alerts are ignored; reactivating notifications and clarifying ops workflows is urgent.

### Cross-Platform Synchronization
- **Status tracking**: DSS doesn't reflect UBM/legacy system completions. Adding a column to show legacy-system status would allow bulk-hiding processed invoices, clearing operational backlogs.
- **Filtering improvements**: Current filters display completed and failed invoices indiscriminately; refining them would provide accurate "action required" visibility.

---

## Long-Term Architectural Evolution
Decoupling from legacy systems via a rearchitected output service is proposed to accelerate processing and reduce dependencies.

### Output Service Modernization
- **Direct UBM integration**: Migrating output generation from the legacy system to DSS would use Cosmos DB data directly, skipping 7-10 legacy workflow steps and enabling near-real-time UBM feeds.
- **Fallback mechanism**: Failures would reroute to the legacy system, ensuring continuity while isolating DSS for most invoices.
- **Strategic impact**: This shift positions DSS as a standalone platform, critical for future scalability and operational resilience.

### Hybrid Transition Approach
- **Coexistence phase**: Legacy and DSS output pathways would operate in parallel, allowing incremental validation. Legacy dependencies reduce to error handling only, minimizing bottlenecks.
- **Implementation clarity**: The output service code requires minimal adjustments to use DSS data instead of legacy sources, avoiding major redevelopment.
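The exception type and timings below are assumptions; this is a minimal sketch of a database-transaction retry wrapper with exponential backoff, as discussed for the SQL-timeout failures:

```python
import time
from functools import wraps

class TransientDbError(Exception):
    """Stand-in for the driver's timeout/connection-busy exceptions."""

def with_backoff(max_attempts: int = 4, base_delay: float = 1.0):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except TransientDbError:
                    if attempt == max_attempts - 1:
                        raise  # surface for manual review after the final attempt
                    time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
        return wrapper
    return decorator

@with_backoff()
def commit_invoice(invoice_id: str) -> None:
    ...  # the real transaction; raises TransientDbError on SQL timeouts
```

Retrying only the transaction (not the whole pipeline step) keeps recovery automatic for transient failures while still escalating persistent ones.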
Queue Priorities Update
## Queue Prioritization System Updates
The queue system is being restructured to prioritize live builds and specific high-value clients, with Victra identified as the initial priority client.
- **Current queue adjustments**: The existing queue now exclusively handles live builds to accelerate processing, with new client-specific queues planned for imminent deployment.
- **Prioritization logic**: A fixed prepay-first approach (1-day SLA) will be implemented immediately, leveraging system status report data to flag overdue items. Future enhancements may include dynamic prioritization via a dedicated UI.
- **Overdue handling**: Postpay items (3-day SLA) will be escalated when exceeding their window, though invoice processing limitations require further refinement for true past-due identification.

## Ascension Client File Transfer Method
A solution was finalized for Ascension to transition scanned paper bills during their migration, emphasizing efficiency and minimal manual intervention.
- **Google Drive selected**: A shared client-specific folder will be created for Ascension to upload PDFs, avoiding inefficient email transfers or FTP setup delays.
- **Process ownership**: Rachel Beck will manage file retrieval from the folder, with backup personnel designated to ensure continuity during peak periods.
- **Temporary scope**: This method applies only to the initial transition month, with no recurring file transfers anticipated.

## Final Bill Identification for Simon Client
Strategies were discussed to flag final bills within Simon's 6,700-account dataset to prevent erroneous activation of inactive accounts.
- **System limitations**: No automated report exists to identify final bills pre-processing; the "final bill" flag only triggers after UBM processing and account inactivation.
- **Workaround approach**: Relying on explicit "final bill" notations on invoices or client-provided closure confirmations, with manual notation during setup where possible.
- **Resource constraints**: Development of automated solutions is deferred due to DSS/AI workload priorities, leaving Mary's team to handle exceptions manually.

## PNG-to-PDF Conversion for Simon
A technical process was outlined to convert 6,000+ PNG image links to PDFs for Simon using existing automation tools (a sketch of the conversion step follows this summary).
- **Script deployment**: A pre-built script (developed by Chris/Tim) will process the links in ~20-60 minutes, requiring clear output folder specifications and naming conventions.
- **Duplicate management**: Expected duplicates in the dataset will be handled during processing, with Adrian providing execution support and troubleshooting.
- **Access coordination**: Script access issues will be resolved via direct Teams file sharing if repository permissions are delayed.

## Share Drive Storage Management
Capacity constraints on the DS Share Drive prompted discussions on retention policies and storage solutions.
- **Retention ambiguity**: No formal SLA governs setup invoice retention, though audit compliance concerns warrant caution before deletion.
- **Space recovery options**: Proposals included deleting legacy setup invoices from completed onboardings or exploring storage expansion, pending Terra's guidance.
- **Platform SLA alignment**: Post-setup invoices covered by platform SLAs may be prioritized for deletion, while isolated setup documents require policy clarification.
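The Chris/Tim script itself wasn't shared in the meeting; below is a minimal sketch of the conversion step under assumed inputs (a text file of image URLs, with Pillow and requests installed; file names are hypothetical):

```python
from pathlib import Path

import requests
from PIL import Image

def convert_links_to_pdfs(link_file: str, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    seen = set()  # skip duplicate links, which the dataset is known to contain
    for i, url in enumerate(Path(link_file).read_text().split()):
        if url in seen:
            continue
        seen.add(url)
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        tmp = out / f"invoice_{i:05d}.png"
        tmp.write_bytes(resp.content)
        # Pillow saves a single-image PDF; converting to RGB drops any
        # alpha channel, which PDF output doesn't support.
        Image.open(tmp).convert("RGB").save(tmp.with_suffix(".pdf"))
        tmp.unlink()  # remove the intermediate PNG

convert_links_to_pdfs("simon_links.txt", "output/simon_pdfs")
```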
DSS Priority Review
## Backlog Challenges in Data Services
Significant processing delays are causing critical backlogs in prepay and UEM customer invoices, risking late fees and service disconnections.
- **Prepay/UEM prioritization**: Immediate focus required due to severe operational impact and customer relationship risks.
- **Root cause analysis**: System originally designed for 2,400 daily invoices now handles 3,100+, exceeding capacity by ~30%.
- **Urgency factors**: 6,528 invoices currently stuck in the data acquisition queue with only 1,700-2,000 processed daily.

### Processing Bottlenecks
DSS system limitations and workflow inefficiencies compound backlog issues.
- **First-in-first-out flaw**: Current non-prioritized processing delays critical live bills for postpaid/historical invoices.
- **Data quality failures**: 28% of invoices require manual correction due to validation errors, creating 10x more work than anticipated.
- **Duplicate influx**: 338+ duplicate invoices from web/mail sources require deletion, wasting processing resources.

## Strategic System Improvements
Operational changes aim to accelerate invoice processing and prioritize critical workloads.

### Queue Restructuring
A three-tiered prioritization model is being implemented immediately:
- **Live bills queue**: Prepay/UEM invoices receive top priority to prevent service disruptions.
- **Priority customer tier**: Key accounts like Victra/Medline manually elevated after live bills.
- **Historical batch processing**: Non-urgent invoices deferred until critical queues clear.

### Capacity Optimization
Technical enhancements target a 20-30% throughput increase:
- **Architecture review**: Investigating database/indexing constraints causing nightly processing drops to 600 invoices.
- **Unknown invoice resolution**: Addressing thousands of stalled "unknown" status invoices blocking pipeline flow.

## Critical Customer Focus
Specific high-priority accounts require immediate manual intervention:
- **Victra/Park National**: Top-priority processing to prevent financial penalties.
- **UEM legacy clients**: Expedited handling for foundational customers.
- **Sheets/Royal Farms**: Manual queue-jumping via data audit bypasses.

## Anomalous New Account Surge
1,300+ unexpected new accounts demand investigation and special handling:
- **Unprecedented volume**: Primarily from property management firms (Aspen/Bayshore/Caliber Living), suggesting systemic enrollment failure.
- **Operational deviation**: New accounts bypassing standard enrollment workflows require manual reprocessing.
- **Suspected origin**: Final bills from tenant transitions potentially causing abnormal mail/inbox submissions.

## Performance Monitoring Plan
A metrics-driven approach was established to track resolution progress:
- **Queue clearance tracking**: Live bill prioritization effectiveness measured via nightly processing rates.
- **Capacity enhancement timeline**: Technical solutions for throughput increase targeted within 24-48 hours.
- **Critical account dashboard**: Dedicated visibility into Victra/UEM/Medline invoice clearance status.
CDS/DSS - Daily Stand Up 2025
## Summary

### Data Center Expansion Concerns
The meeting addressed significant concerns about data center proliferation in Wisconsin, particularly near Madison and along Lake Michigan. Key issues include:
- **Resource exploitation**: Data centers consume substantial water and electricity while driving up local utility costs without providing proportional community benefits or employment opportunities.
- **Land use conflicts**: These facilities are targeting small towns experiencing population decline, competing with urgent needs for housing development land. Local resistance is mounting due to perceived resource mismanagement.
- **Regional comparisons**: Similar patterns were noted in Northern Virginia, with suggestions that economically struggling areas like West Virginia might benefit more from such investments.

### Local Political Tensions
Discussion highlighted volatile governance issues in suburban communities:
- **Harassment by officials**: Town board members have engaged in retaliatory actions against residents, including workplace intimidation and personal confrontations over policy disagreements on social media.
- **Recall efforts**: Multiple recall elections have occurred to remove problematic officials, though one remains in power despite community efforts.

### Invoice Lifecycle Reporting Challenges
A major focus was understanding the invoice tracking system's complexities:
- **Report limitations**: Existing Power BI reports (sourced from an SQL Server production database) only partially track invoice statuses, lacking end-to-end visibility from ingestion through payment processing.
- **Data accessibility barriers**: Current systems require accessing five separate reports to determine an invoice's full status, creating operational inefficiencies.
- **Unified reporting initiative**: Plans are underway to consolidate data into a centralized system, though database access concerns exist due to its production environment status. Expertise from SQL specialists will be leveraged to develop this solution.

### Administrative Priorities
Upcoming organizational tasks were outlined:
- **Overdue training**: Mandatory LMS compliance training (originally due in September) must be completed immediately.
- **Performance reviews**: Preparation for year-end evaluations is required, including goal updates and documentation for the annual review cycle. HR has provided specific guidelines and deadlines for this process.
Invoice Ops Alignment
## Summary
The meeting focused on critical operational challenges and strategic improvements for invoice processing. Key topics included current capacity constraints, with the system handling approximately 3,000 invoices daily, and plans to prioritize live builds while developing a secondary priority queue for urgent cases. Significant issues were identified with invoices getting stuck during assessment due to technical failures or missing data (e.g., disconnection notices lacking start dates), leading to unreported bottlenecks. Additionally, gaps in data visibility emerged, such as invoices not linked to clients/vendors during processing, causing inaccuracies in system status reports. The need for better cross-team coordination was emphasized to resolve data discrepancies and align expectations across systems like CRM, DSS, and billing platforms.

## Wins
- **Improved DSS Filtering**: New filters were added to identify failing invoices, enabling proactive intervention (e.g., addressing 30 stuck invoices flagged last Friday).
- **Queue Prioritization Strategy**: A clear framework was established to prioritize live builds over historical data, with plans to introduce a secondary priority queue for high-urgency cases.

## Issues
- **Processing Capacity Limitations**: Current capacity (3,000 invoices/day) is insufficient for peak volumes, risking delays during high-inflow periods (e.g., month-end).
- **Assessment Failures**: Technical errors in DSS cause invoices to stall without alerts, requiring manual intervention (e.g., ~2,000 invoices currently needing review).
- **Data Inconsistencies**:
  - System status reports show "meters" instead of invoices, skewing volume metrics (e.g., 10,000 meters may represent far fewer invoices).
  - Invoices lack client/vendor assignment when stuck mid-process, leading to "unknown" entries in reports.
- **Tooling Deficiencies**:
  - DSS filters malfunction (e.g., OR logic instead of AND), hindering targeted troubleshooting.
  - No enforcement of file-naming conventions for third-party submissions, causing unmapped clients.
- **Reporting Blind Spots**: Critical metrics (e.g., late fees, emergency payments) cannot be traced back to specific clients, creating reconciliation challenges.

## Commitments
- **Invoice Processing Strategy**: *Gorav and Shake* to discuss scaling solutions at the 4 PM meeting today.
- **System Documentation**: *Matthew and Merchant* to collaborate on documenting the system status data source and access protocols.
- **DSS Filter Fixes**: *Team* to repair broken DSS filters this week to enable accurate data segmentation.
- **Operator Engagement**: *Operations* to allocate 5-10% bandwidth to address stuck invoices in DSS starting next week.
UBM Remediation Plan
## System Performance and Bulk Update Challenges
Significant slowdowns occurred in the UBM system due to concurrent bulk location updates by multiple users, causing job failures and unresponsive interfaces. Key observations include:
- **Resource contention**: Simultaneous updates by OPS and CSM teams during peak hours triggered system locks, preventing successful data processing even during off-peak times.
- **Interface issues**: Users experienced persistent gray screens and unrefreshed pages despite changes appearing in logs, suggesting potential admin privilege gaps or unresolved system bottlenecks.
- **Mitigation strategy**: Proposing hourly user limits (e.g., ≤10 concurrent updaters) to prevent overload, with ongoing monitoring to validate effectiveness.

## Recurring Data Mapping Issues
Persistent errors (e.g., 3018 "bill block not mapped") require systematic solutions for virtual account-location alignment:
- **Current workaround**: Exporting unmapped virtual accounts and locations across all customers for manual cross-referencing, enabling batch script imports to resolve mismatches.
- **Matching logic**: For complex cases like Banyan, matching relies on street numbers + unit identifiers, while simpler cases use street numbers only (see the sketch at the end of this summary).
- **Automation potential**: Developing scripts to auto-resolve matches where address patterns are consistent (e.g., unit-number-based matches) could reduce manual effort.

## Account Attribute Management
Payment file errors stem from missing geocodes and utility-type attributes, highlighting systemic gaps:
- **Attribute assignment flaws**: Current account-level attribute linking fails when new accounts (e.g., hyphenated variants) lack inherited attributes from grouped accounts.
- **Proposed solution**: Shifting attributes to utility types (e.g., electric/gas) instead of accounts would auto-apply codes to new bills, eliminating ~90% of errors.
- **Transition plan**: Editing payment files to reference utility-type attributes instead of account attributes, then deprecating obsolete account-level data.

## Invoice Discrepancy Resolution
Daily handling of "total charges not adding up" errors involves targeted adjustments with critical exclusions:
- **Bulk adjustment protocol**: Adding manual adjustment lines for single-location bills, excluding:
  - Summary bills (multi-location invoices risking misallocated charges across cost centers).
  - Sensitive clients (e.g., Sheets/Royal Farms) where blanket adjustments disrupt financial reporting.
- **Volume management**: Prioritizing sub-800 error batches for efficiency, as larger volumes increase match complexity.

## Process Optimization Framework
Establishing sustainable workflows for recurring issues:
- **Centralized tracking**: Maintaining a live list of high-frequency errors (e.g., 3018 mismatches, charge discrepancies) for prioritized scripting/batch fixes.
- **Automation roadmap**: Exploring scripts for rule-based virtual account mapping (e.g., Banyan's unit-number logic) to reduce manual reviews.
- **Execution rhythm**: Daily resolution for charge discrepancies and weekly bulk mapping, adjusted based on error volume and match complexity.
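The matching rules were described only informally; below is a minimal sketch assuming hypothetical record shapes, pairing unmapped virtual accounts to locations on street number, with unit identifiers required only for complex (Banyan-style) cases:

```python
import re

def address_key(address: str, require_unit: bool) -> tuple:
    # The first number in the string is taken as the street number;
    # "Unit"/"Ste"/"Apt" suffixes supply the unit identifier.
    street = re.search(r"\d+", address)
    unit = re.search(r"(?:unit|ste|apt)\s*([\w-]+)", address, re.IGNORECASE)
    key = (street.group() if street else None,)
    if require_unit:
        key += (unit.group(1).upper() if unit else None,)
    return key

def match_accounts(virtual_accounts: list[dict], locations: list[dict],
                   require_unit: bool = False) -> list[tuple]:
    index = {address_key(l["address"], require_unit): l["location_id"]
             for l in locations}
    return [(va["va_id"], index[k])
            for va in virtual_accounts
            if (k := address_key(va["address"], require_unit)) in index]

# Banyan-style matching needs street number + unit; simple cases use street only.
print(match_accounts(
    [{"va_id": "VA-1", "address": "120 Main St Unit 4B"}],
    [{"location_id": "LOC-9", "address": "120 Main Street, Apt 4B"}],
    require_unit=True,
))  # [('VA-1', 'LOC-9')]
```

Ambiguous keys (two locations sharing a street number with no unit) should fall back to manual review rather than auto-mapping, consistent with the caution noted above.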
DSS Plan
## System Overview and Transition Goals
The meeting focused on the current state of invoice processing systems, highlighting the coexistence of the legacy DDAS application and the newer DSS platform. DDAS, a 20-year-old .NET Windows system, faces frequent stability issues and requires replacement, while DSS is still evolving toward feature parity. The ultimate objective is to fully retire DDAS by ensuring DSS can handle all functionalities, though dependencies between the systems persist, such as shared data points where DSS pulls status updates from DDAS.

### Legacy System Challenges
- **Critical instability**: DDAS experiences weekly failures due to outdated architecture, necessitating urgent migration efforts.
- **Feature gap**: DSS lacks certain capabilities present in DDAS, preventing immediate decommissioning of the legacy system.

### Modernization Strategy
- **Redesigning workflows**: Historical DDAS constraints shouldn't dictate DSS design; workflows must be re-evaluated for efficiency.
- **Documentation priority**: Comprehensive process mapping for both systems is essential, involving collaboration with DDAS experts and end-users.

## Integration Challenges with UBM
A significant pain point involves data handoffs between DSS and UBM, an archaic downstream system. UBM requires rigidly formatted data that DSS doesn't natively support, causing frequent failures.

### Key Issues
- **Data translation failures**: Mismatches occur when DSS output doesn't align with UBM's expectations, leading to processing errors.
- **Architectural solution**: A middleware "translator" layer will be developed to reformat DSS data for UBM compatibility, avoiding disruptive changes to DSS core logic.

## DSS User Interface Enhancements
Immediate improvements target the DSS UI to streamline invoice error resolution and data filtering.

### Error Visualization
- **Highlighting critical fields**: Missing data (e.g., billing start dates) will be flagged in red within the UI, enabling operators to quickly identify and rectify errors without manual searching.
- **Implementation logic**: The system will map database error codes directly to UI elements, dynamically highlighting affected rows.

### Filter System Audit
- **Inconsistent results**: Filters like "Live Bills" fail to capture all relevant records due to naming variants (e.g., "Live_Bills" vs. "Live Bills").
- **Comprehensive review**: Each filter's logic, data source, and query accuracy will be validated to ensure complete and correct results.

### Data Source Migration
- **Status display overhaul**: Shift from Cosmos DB to the common SQL Server database for status information, aligning with DDAS data and improving reliability.
- **Filter logic correction**: Change multi-criteria filters from "OR" to "AND" to refine search results (see the sketch at the end of this summary).

## Knowledge Transfer and Collaboration
The team will prioritize understanding existing systems through hands-on exploration of DSS components and documentation gaps.

### Approach
- **Targeted learning**: Investigate ambiguous status labels (e.g., "Failed Document Batch Sticker") and workflows to clarify their purpose and origin.
- **Cross-system insights**: Challenge legacy assumptions inherited from DDAS to modernize DSS's design principles.

## Operational Logistics
Daily syncs are scheduled for 9 AM Eastern Time to track progress, with flexibility for ad-hoc sessions. Environment setup issues (e.g., local development hurdles) will be resolved through credential verification and Azure configuration.
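A minimal sketch of the filter-logic fix, with illustrative criteria: combining predicates with `all` (AND) instead of `any` (OR) is what narrows multi-criteria searches to records matching every condition.

```python
def make_filter(criteria: dict, combine_with_and: bool = True):
    """criteria maps field name -> required value, e.g.
    {"status": "Live Bills", "client_id": "C-1001"}."""
    def predicate(record: dict) -> bool:
        checks = (record.get(field) == value for field, value in criteria.items())
        return all(checks) if combine_with_and else any(checks)
    return predicate

records = [
    {"status": "Live Bills", "client_id": "C-1001"},
    {"status": "Historical", "client_id": "C-1001"},
]
live_for_client = make_filter({"status": "Live Bills", "client_id": "C-1001"})
print([r for r in records if live_for_client(r)])  # only the first record
```

With OR semantics, the second record would also pass (it matches on `client_id`), which is exactly the over-broad behavior the audit is meant to eliminate.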
DSS Check-in
## DSS Processing Challenges and Stuck Invoice Issues
The meeting focused on persistent challenges within the Data Services System (DSS), where invoices are failing to process correctly, leading to operational bottlenecks. Key issues include invoices stuck in pre-audit queues due to missing data, incorrect configurations, and system errors. Approximately 3,000 invoices are currently impacted, with failures stemming from multiple root causes rather than a single point of failure.

### Pre-Audit Step and Manual Intervention Requirements
The DSS pre-audit step requires operator intervention to resolve errors, but inconsistencies in status tracking complicate prioritization.
- **Status inaccuracies**: Workflow labels like "complete" or "in process" do not reliably reflect an invoice's actual state, necessitating manual verification.
- **Error backlog growth**: Failed items accumulate due to unresolved issues like missing client IDs or unit-of-measure mismatches, requiring systematic review.
- **Operational dependency**: Technical fixes alone are insufficient; operations teams must manually address errors such as invalid service descriptions (e.g., "school tax" instead of "tax pr").

### Critical Error Patterns and Technical Failures
Specific recurring errors prevent invoices from advancing through DSS, with some requiring reprocessing while others need manual correction.
- **Missing client/vendor codes**: Over 1,000 invoices lack client IDs or vendor codes because DSS skips the final assignment step when earlier processing stages fail.
- **Transient system errors**: Issues like "SQL service connection busy" or "unable to import invoice templates" can often be resolved through automated reprocessing.
- **Data validation gaps**: Invoices from file-watcher sources sometimes omit client IDs, causing pre-audit failures despite the IDs not being required at upload.

### Unknown Client/Vendor Backlog Discovery
A hidden backlog of ~2,000 invoices assigned to "unknown" clients/vendors was discovered, contributing to missed customer bills and payment delays.
- **Unreported accumulation**: These invoices went unnoticed due to a lack of tracking in system reports, with some dating back to June.
- **Customer impact**: Bills for pay customers were delayed by weeks, triggering complaints and potential service disruptions.
- **Duplication risk**: Reprocessing these invoices may create duplicates, requiring manual cleanup to avoid billing errors.

### Root Cause Analysis and Mitigation Strategies
The team is prioritizing root-cause fixes over temporary reporting solutions, focusing on systemic improvements.
- **Client ID resolution**: Investigating why client IDs are missing from file-watcher invoices and whether matching logic can be improved during pre-audit.
- **Vendor-specific workflows**: For delivery-based vendors (e.g., propane), invoices may need rerouting to data acquisition instead of account setup to handle date-related failures.
- **Interpreter layer enhancements**: Long-term fixes will embed logic between DSS and UBM to handle edge cases (e.g., defaulting missing dates to service periods).

### Operational Priorities and Monitoring
Immediate efforts target reducing the backlog while evaluating the need for enhanced monitoring.
- **Backlog triage**: Reprocessing low-hanging-fruit errors (e.g., SQL timeouts) while manually investigating complex cases like missing client IDs.
- **Visibility trade-offs**: Adding "unknown" invoices to system status reports will be considered only if root-cause fixes prove insufficient within days.
- **Preventive measures**: Validations will be strengthened to ensure critical fields like client IDs are populated before DSS processing.
DSS Issues
## File Watcher Processing Limitations The team identified critical gaps in the File Watcher system-a process that automatically ingests bills from an FTP share location. Currently, it lacks support for vendor/client exclusions and relies entirely on a specific file naming convention (`$client_id$` prefix) to extract client identification. When filenames omit this convention, bills default to "unknown" client/vendor status, causing processing failures. This limitation prevents automated exclusion routing and forces manual intervention for affected bills. ### Naming Convention Enforcement - **Immediate operational change required**: Operations must enforce strict filename formatting (including `$client_id$`) during FTP uploads to ensure proper identification. - **System limitation**: File Watcher only uses filenames for client/vendor mapping-no fallback to OCR or bill content exists, making naming compliance non-negotiable. ## DSS Processing Failures and Stuck Bills Approximately 3,144 bills are stranded in DSS due to diverse errors, with 1775 flagged for "unknown" client/vendor issues. These bills remain in "waiting for system" or "in process" states indefinitely because DSS delays setting client/vendor codes until its final processing step. If errors occur earlier (e.g., during data validation), codes remain unassigned, and bills stall without alerting users. ### Error Root Causes - **Common failure points**: - Invalid unit of measure errors (e.g., unrecognized measurement units in bill data). - Database connection timeouts during invoice template imports. - File system throttling issues blocking Cosmos record updates. - Missing client/vendor data in API inputs. - **UI/UX gap**: DSS doesn’t surface errors prominently-users must manually inspect bill details to discover issues, leading to oversight. ## Client/Vendor Code Assignment Mechanics Client/vendor codes are assigned exclusively in DSS’s final processing stage (`Import Invoice Build Data Queue`). Earlier steps rely on inputs from IPS (Intelligent Processing Service) or filename parsing, but failures in intermediate steps (e.g., account validation) prevent code assignment. The system *does not retroactively update* these codes post-failure, leaving bills in "unknown" limbo until manually resolved. ### Data Flow Challenges - **No retroactive updates**: Initial API-provided client/vendor codes display as "null" in UIs even if later derived from filenames or OCR, creating confusion. - **Account dependency**: Codes are pulled from account records only after successful account matching-a step vulnerable to errors in vendor/client identification. ## System Disconnect and Visibility Issues DSS operates in isolation from core systems, creating blind spots in status tracking. The "waiting for system" filter in UIs often misfires, and database queries return inconsistent counts due to: - Lack of real-time synchronization between DSS and main databases. - Duplicate bill entries from reprocessing attempts. This disconnect complicates error diagnosis and inflates perceived backlog numbers (e.g., 3,144 stuck bills include duplicates). ### Impact on Operations - **Unactionable alerts**: "Unknown client/vendor" flags don’t distinguish between new bills in progress and genuinely stuck items. - **Manual triage burden**: Operations must inspect individual bills to uncover errors like unit mismatches or connection failures. ## Backlog Resolution Strategy The team prioritized tackling the 3,144 stuck bills by error type volume. 
## DSS Processing Failures and Stuck Bills
Approximately 3,144 bills are stranded in DSS due to diverse errors, with 1,775 flagged for "unknown" client/vendor issues. These bills remain in "waiting for system" or "in process" states indefinitely because DSS delays setting client/vendor codes until its final processing step. If errors occur earlier (e.g., during data validation), codes remain unassigned and bills stall without alerting users.

### Error Root Causes
- **Common failure points**:
  - Invalid unit of measure errors (e.g., unrecognized measurement units in bill data).
  - Database connection timeouts during invoice template imports.
  - File system throttling issues blocking Cosmos record updates.
  - Missing client/vendor data in API inputs.
- **UI/UX gap**: DSS doesn't surface errors prominently; users must manually inspect bill details to discover issues, leading to oversight.

## Client/Vendor Code Assignment Mechanics
Client/vendor codes are assigned exclusively in DSS's final processing stage (`Import Invoice Build Data Queue`). Earlier steps rely on inputs from IPS (Intelligent Processing Service) or filename parsing, but failures in intermediate steps (e.g., account validation) prevent code assignment. The system *does not retroactively update* these codes after a failure, leaving bills in "unknown" limbo until manually resolved.

### Data Flow Challenges
- **No retroactive updates**: Initial API-provided client/vendor codes display as "null" in UIs even if later derived from filenames or OCR, creating confusion.
- **Account dependency**: Codes are pulled from account records only after successful account matching, a step vulnerable to errors in vendor/client identification.

## System Disconnect and Visibility Issues
DSS operates in isolation from core systems, creating blind spots in status tracking. The "waiting for system" filter in UIs often misfires, and database queries return inconsistent counts due to:
- Lack of real-time synchronization between DSS and the main databases.
- Duplicate bill entries from reprocessing attempts.

This disconnect complicates error diagnosis and inflates perceived backlog numbers (e.g., the 3,144 stuck bills include duplicates).

### Impact on Operations
- **Unactionable alerts**: "Unknown client/vendor" flags don't distinguish between new bills in progress and genuinely stuck items.
- **Manual triage burden**: Operations must inspect individual bills to uncover errors like unit mismatches or connection failures.

## Backlog Resolution Strategy
The team prioritized tackling the 3,144 stuck bills by error type volume. Immediate steps include:
1. **Data extraction**: Query Cosmos DB to group bills by error message (e.g., "invalid unit of measure," "connection timeout"); see the query sketch below.
2. **Systematic fixes**:
   - Retry bills throttled by file system issues.
   - Batch-update unit of measure errors.
   - Reprocess bills with transient failures.
3. **Process adjustments**: Exclude File Watcher-uploaded bills from automated processing until naming compliance is enforced.

### Long-Term Considerations
- **Enhanced error visibility**: Redesign the DSS UI to highlight errors upfront instead of burying them in detail views.
- **Earlier code assignment**: Explore moving client/vendor identification to earlier processing stages to prevent "unknown" cascades.
- **System integration**: Synchronize DSS statuses with central databases to improve monitoring accuracy.
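A sketch of the grouping query from step 1, using the Python `azure-cosmos` SDK. The database, container, and field names (`errorMessage`, `workflowStatus`) are placeholders; the actual document schema wasn't shared in the meeting.

```python
from azure.cosmos import CosmosClient

# Connection details and the document schema are placeholders; the meeting
# only established that stuck bills live in Cosmos DB and should be grouped
# by error message to rank error types by volume.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("dss").get_container_client("bills")

QUERY = """
SELECT c.errorMessage, COUNT(1) AS billCount
FROM c
WHERE c.workflowStatus IN ('waiting for system', 'in process')
GROUP BY c.errorMessage
"""

# Cosmos SQL does not allow ORDER BY together with GROUP BY, so sort
# client-side to rank error types by volume.
rows = list(container.query_items(query=QUERY, enable_cross_partition_query=True))
for row in sorted(rows, key=lambda r: r["billCount"], reverse=True):
    print(f'{row["billCount"]:>6}  {row["errorMessage"]}')
```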
Data Pipeline Review
## Data Acquisition Queue Analysis
The data acquisition queue currently contains approximately 2,000 pending invoices, with processing occurring every 5 minutes in batches of 10 invoices. System monitoring indicates stable throughput despite the backlog.
- **Processing rate validation**: Daily capacity calculations confirm the system handles ~2,880 invoices (24 hrs × 60 min ÷ 5-min intervals × 10 invoices), aligning with transaction logs showing ~3,000 processed daily (see the sketch below).
- **Backlog dynamics**: Queue levels fluctuate between 1,500 and 2,000 items due to continuous inflow, indicating sustained processing rather than stagnation.
- **OpenAI dependency**: The primary processing bottleneck is OpenAI API response times, though current throughput meets configured system capabilities.

## Invoice Processing Workflow
Invoices undergo a multi-stage transformation from raw input to legacy system integration, with quality checks throughout.
- **Initial extraction**: OCR converts invoices to text, which OpenAI's LLM structures into standardized key-value pairs.
- **Canonical conversion**: Standardized data is transformed into system-specific formats, using historical patterns where possible.
- **Historical matching**: Prior invoice treatments inform current processing, failing when line-item discrepancies or OCR errors create mismatches.
- **Audit & integration**: Post-processing includes validation against legacy system constraints before final data population.

## System Optimization Proposals
Two key enhancements address throughput and prioritization challenges in the processing pipeline.
- **Priority queuing**: Separate queues for live (urgent) versus historical invoices would ensure time-sensitive invoices process first, via query logic adjustments rather than FIFO processing.
- **Backlog investigation**: A significant discrepancy exists between the reported 5,000 pending invoices and the ~2,000 observed in the queue, meaning roughly half of the reported invoices require investigation into ingestion failures or misclassification.

## Audit Queue Challenges
Approximately 4,000 invoices are stuck in failed pre-audit status with diverse errors requiring operational intervention.
- **Error resolution complexity**: Fixes require business process knowledge (e.g., validating "service charge-PR" errors) rather than technical solutions alone.
- **Status confusion**: New "Audit Data" workflow statuses lack documentation, complicating issue diagnosis between the legacy and DSS systems.
- **Ownership gap**: Operational teams must address these failures, since they stem from business rule mismatches rather than system defects.

## System Integration & Future Direction
The DSS-to-legacy handoff involves sequential data population steps, with opportunities to streamline output.
- **Integration touchpoints**: Critical interactions include bill/document batch creation, audit validation, and multi-step legacy data hydration before final output.
- **Duplicate risk**: Manual legacy system uploads could create duplicates if run in parallel with DSS processing, requiring procedural safeguards.
- **Output optimization**: Prioritizing direct DSS-to-UBM output would bypass legacy system dependencies, contingent on UBM accepting standardized formats.
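The capacity arithmetic above is simple enough to sanity-check in a few lines. In this sketch, the batch interval and size come from the meeting; the daily inflow figure is an assumption chosen to match the observed steady queue depth.

```python
BATCH_INTERVAL_MIN = 5    # one batch every 5 minutes (from the meeting)
BATCH_SIZE = 10           # 10 invoices per batch (from the meeting)
QUEUE_DEPTH = 2_000       # observed pending invoices
DAILY_INFLOW = 2_800      # assumption: inflow near capacity, keeping the queue stable

batches_per_day = 24 * 60 // BATCH_INTERVAL_MIN   # 288 batches
daily_capacity = batches_per_day * BATCH_SIZE     # 2,880 invoices/day

# The net drain rate decides whether the backlog is shrinking or just churning.
net_drain_per_day = daily_capacity - DAILY_INFLOW
days_to_clear = QUEUE_DEPTH / net_drain_per_day if net_drain_per_day > 0 else float("inf")

print(f"capacity: {daily_capacity}/day, net drain: {net_drain_per_day}/day, "
      f"days to clear backlog: {days_to_clear:.0f}")
```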
Constellation: Monthly Sync
## Summary
### Current System Performance and Monitoring
The meeting began with an update on system performance, noting isolated slowdowns in invoice processing observed the previous week. While no active issues were currently present, the team emphasized the need for proactive guardrails to monitor queue buildup and system responsiveness.
- **Performance monitoring strategy**: Implement alerts for queue anomalies and evaluate adjustments to the processing pipeline to prevent future latency issues.
- **Historical context**: Past incidents highlighted the importance of real-time monitoring, especially as invoice processing volumes are expected to increase significantly in the coming months.

### Token Processing and Scalability Challenges
A detailed discussion covered token processing bottlenecks in the Azure OpenAI batch system, where delayed responses were causing queue congestion. The current batch-based approach operates under a 24-hour SLA but typically delivers results in 10-15 minutes.
- **Token throughput limitations**: The system currently handles millions of tokens, but scalability concerns emerged given anticipated growth in monthly invoice volumes.
- **Cost-latency trade-offs**: The team explored provisioned throughput (PTU) as a solution, which dedicates compute resources to guarantee consistent latency. For the Global Data Zone deployment, 15 PTUs yield 4,500 input tokens per minute at a fixed monthly cost, in contrast to pay-as-you-go models where shared compute causes variability.
- **Investigation needed**: Calculating current token consumption is critical to determine whether PTU's guaranteed throughput justifies its cost compared to the existing batch or pay-per-use models.

### Architectural Planning for Energy Manager Agent
A new project to develop an "Energy Manager Agent" was introduced, aimed at automating customer interactions using market data, contract details, and usage history. The agent would proactively advise clients on pricing opportunities or contract renewals via email, chat, or voice.
- **Core functionality**: The agent will synthesize real-time market intelligence (e.g., price fluctuations) with customer-specific data (e.g., contract end dates) to trigger actionable insights, such as recommending cost-saving contract renewals.
- **Data architecture**: Decouple from legacy systems by caching enriched data (e.g., pre-processed customer profiles) in a dedicated store like Cosmos DB, refreshed via APIs. The team seeks guidance on optimizing data retrieval frequency and minimizing real-time computation.
- **Prioritization**: Focus initially on the "brains" (the reasoning engine) before addressing communication channels. Architectural patterns from similar energy-sector implementations will be evaluated.

### Industry Adoption of AI Agents
The conversation shifted to broader industry trends, highlighting rapid agent adoption in internal operations (e.g., HR workflows), though customer-facing deployments remain cautious.
- **Efficiency gains**: One financial services firm reported reducing customer service agents from 4,000 to under 1,000 within a year by using AI for call handling, sentiment analysis, and escalation routing.
- **Regulatory and market readiness**: Industries like energy and finance face unique hurdles in customer-facing agent deployment, prompting phased rollouts (e.g., starting with chat interfaces before voice).
### Strategic Next Steps
Immediate actions include a deep dive into token metrics for PTU feasibility and a dedicated session to architect the Energy Manager Agent.
- **Technical exploration**: Analyze Azure OpenAI resource metrics to quantify token usage and model costs (e.g., comparing GPT-3.5 vs. GPT-4); see the sketch below for the PTU feasibility arithmetic.
- **Cross-functional collaboration**: Schedule a design workshop with Azure specialists to review data-caching strategies, authentication, and agent orchestration patterns.
- **Broader alignment**: Involve internal Constellation teams to align the agent project with parallel AI initiatives in other business units.
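A back-of-the-envelope sketch of the PTU feasibility check. Only the throughput figure (15 PTUs yielding 4,500 input tokens per minute on the Global Data Zone deployment) comes from the meeting; the invoice volume and tokens-per-invoice numbers are illustrative placeholders to be replaced with real Azure OpenAI resource metrics.

```python
# PTU feasibility back-of-the-envelope, per the Strategic Next Steps above.
PTU_INPUT_TPM = 4_500            # from the meeting: 15 PTUs, Global Data Zone
MONTHLY_INVOICES = 100_000       # placeholder workload
TOKENS_PER_INVOICE = 3_000       # placeholder: OCR text plus structured output

required_tokens = MONTHLY_INVOICES * TOKENS_PER_INVOICE
minutes_per_month = 30 * 24 * 60
required_tpm = required_tokens / minutes_per_month

print(f"required: {required_tpm:,.0f} tokens/min vs PTU capacity {PTU_INPUT_TPM:,}")
print("PTU sufficient" if required_tpm <= PTU_INPUT_TPM else "need more PTUs or batch")
```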
DSS/UBM Errors
## Summary
### Billing Error Resolution Processes
Key billing errors were analyzed, with proposed solutions to improve data accuracy and processing efficiency.
- **Error 2031 (Days of Service Mismatch)**: Resolved by prioritizing read dates over service period dates for calculations; if read dates are unavailable, service period dates will be used. Discrepancies arose when days-of-service values didn't align with date ranges, impacting downstream reports (see the sketch at the end of this summary).
- **Error 2035 (Invalid Line Items)**: Addressed distribution-only charges (e.g., demand use) appearing in supply-only blocks. Solutions include upstream fixes in DSS to ensure correct observation codes (e.g., `gen_dem_use` for supply blocks) and downstream UBM edits for existing data. Royal Farms cases highlighted ongoing double-counting issues needing manual cleanup.

### Output Processing Challenges
Critical bottlenecks in invoice processing were identified, affecting prepaid and postpay workflows.
- **14.7K invoices stuck in "Ready to Send"**: Only 1,200 were processed recently despite automated systems. Manual intervention is currently required due to failures in the output service, forcing operators to process invoices individually instead of batch-sending.
- **Root cause investigation**: The automated process isn't triggering event logs or picking up batches, with no clear errors. Afton noted this disproportionately impacts high-volume accounts, risking delayed payments and disconnections.

### Data Acquisition and DSS Processing
Backlogs in data ingestion stages were linked to staffing and system limitations.
- **2.8K invoices in data acquisition**: Attributed to OCR/DSS IPS processing limits (10 invoices per 5 minutes) and understaffing. Afton confirmed only three data processors handle the queues, with contractors pending onboarding.
- **Account setup issues**: 1,200 invoices moved to "Account Setup" lacked vendor data, suggesting possible misrouting during troubleshooting.

### System Improvements and Knowledge Transfer
Strategic initiatives focused on documentation and onboarding to address recurring bottlenecks.
- **Documentation overhaul**: New developers will map end-to-end workflows (e.g., DSS JSON pipelines) and error-code resolutions in Confluence. This aims to clarify terms like "audit" and "data acquisition," reducing diagnostic delays.
- **Arcadia integration planning**: Emphasized aligning JSON outputs with DSS requirements. Developers must distinguish between Arcadia-specific errors (e.g., webhook failures) and mapping issues, with dedicated support channels for rapid resolution.
- **Microsoft OpenAI optimization**: Discussed investigating API limitations and scalability options to accelerate data acquisition processing.

### Operational Priorities
Urgent actions include clearing prepaid invoice backlogs and enhancing system resilience.
- **Prepaid invoice clearance**: Manual processing prioritized for 1,300+ prepaid invoices to avoid service disruptions.
- **Developer onboarding**: New engineers will tackle tactical fixes (e.g., DSS audit interfaces) while building institutional knowledge via supervised documentation. Weekly syncs with senior engineers will accelerate context transfer.
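A minimal sketch of the Error 2031 resolution rule (read dates preferred, service period dates as fallback). The field names and the exclusive end-date convention are assumptions and would need confirming against DSS behavior.

```python
from datetime import date
from typing import Optional

def days_of_service(read_start: Optional[date], read_end: Optional[date],
                    service_start: date, service_end: date) -> int:
    """Error 2031 rule as described in the meeting: prefer read dates and
    fall back to the service period when reads are unavailable. Field names
    and the exclusive end-date convention are assumptions."""
    if read_start and read_end:
        start, end = read_start, read_end
    else:
        start, end = service_start, service_end
    return (end - start).days

# Example: read dates are present, so the service period is ignored.
print(days_of_service(date(2025, 1, 3), date(2025, 2, 2),
                      date(2025, 1, 1), date(2025, 2, 1)))  # -> 30
```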
---

## **Summary**
### **Developer Onboarding and Institutional Knowledge**
The team discussed the critical need to invest time upfront in documentation and knowledge transfer to address recurring system bottlenecks. Key themes included:
- **Documentation as a foundation**: New developers should map end-to-end workflows (e.g., DSS JSON pipelines, error-code resolutions) in Confluence to clarify terminology and reduce diagnostic delays.
- **Structured onboarding approach**: Rather than throwing developers into fires immediately, assign them research tasks to document flows, codes, and system dependencies. This builds institutional knowledge while freeing senior engineers (like Shake) for verification and guidance.
- **Terminology standardization**: Terms like "audit," "data acquisition," and "read dates" mean different things across teams. A centralized Confluence page should define these clearly.

### **System Architecture and Workflow Documentation**
Significant gaps exist in understanding the current system architecture, particularly around DSS, IPS, and UBM interactions.
- **Workflow status confusion**: The team lacks a clear, sequential representation of how bills move through the system (e.g., Data Acquisition → DSS IPS → Duplicate Resolution → Account Setup → Output).
- **Code-to-documentation alignment**: Existing Whimsical diagrams may be outdated or incomplete. New developers should validate these against the actual code and update Confluence accordingly.
- **IPS knowledge transfer**: The Invoice Processing Service is a critical but siloed component. Dedicated sessions are needed to document its role, throughput assumptions, and SLA expectations.

### **Arcadia Integration and Error Management**
Planning for the Arcadia integration requires clear ownership and error categorization.
- **Error source distinction**: Developers must distinguish between Arcadia-specific errors (e.g., webhook failures), mapping issues, and problems originating elsewhere (e.g., VDE).
- **Dedicated support channels**: Establish clear escalation paths for Arcadia-related issues to enable rapid resolution with their team.
- **Interpreter layer**: Align JSON outputs between Arcadia and DSS to ensure seamless data flow and minimize downstream errors.

### **Microsoft OpenAI API Optimization**
The team identified the need to investigate API limitations and scalability options.
- **Current bottleneck**: DSS IPS processing is limited to 10 invoices per 5 minutes, creating backlogs in data acquisition.
- **Action items**: Review Microsoft API limits, explore higher-tier options, and determine whether architectural changes on the Constellation side can improve throughput.
- **Timeline**: By December, new developers should understand current API usage, limits, and recommendations for scaling.

### **Operational Priorities and Weekly Syncs**
The team committed to structured, recurring touchpoints to accelerate knowledge transfer and decision-making.
- **Weekly 1-hour RCA and knowledge-transfer sessions**: A defined agenda focusing on root cause analysis and institutional knowledge building.
- **DSS UI and JSON output overview**: Senior engineers will demonstrate DSS functionality, showing how bills flow from PDF input to JSON output, with emphasis on throughput and SLA assumptions.
- **New developer assignments**: Tactical fixes (e.g., DSS audit interfaces) paired with supervised documentation to build context while delivering value.

### **Strategic Expectations and Constraints**
Clear communication about resource availability and the long-term vision.
- **Gaurav's involvement**: Available in a selective capacity for RCA, knowledge transfer, and Arcadia integration planning through year-end. Weekly meetings are recommended to maintain alignment.
- **Arcadia timeline**: Ingestion is straightforward, but correct processing requires significant work. New engineers will support this effort, but full integration will take time.
- **Documentation ownership**: Changes to production systems must include corresponding Confluence updates. This becomes a tracked responsibility to prevent knowledge gaps.

---

## **New Engineer Onboarding Plan**
**Immediate Focus Areas:**
1. **Documentation and Knowledge Mapping**
   - Map end-to-end workflows (e.g., DSS JSON pipelines, error-code resolutions) in Confluence.
   - Document system flows, codes, and dependencies to clarify terminology like "audit" and "data acquisition."
   - Create the first version of the documentation framework, which senior engineers (like Shake) will then validate and refine.
2. **Tactical Fixes with Supervised Learning**
   - Assign tactical fixes (e.g., DSS audit interfaces) while building institutional knowledge.
   - Work under supervision to understand business logic and system dependencies.
   - Focus on learning through doing rather than just reading documentation.
3. **Weekly Knowledge Transfer Sessions**
   - Set up weekly 1-hour RCA (Root Cause Analysis) and knowledge-transfer sessions.
   - Senior engineers will show the DSS UI and JSON output, explaining throughput and SLA assumptions.
   - Define a clear agenda for each session.
4. **Structured Research Tasks**
   - Rather than jumping into fires immediately, have new engineers research and document system flows and codes, error-code resolutions, and business requirements and dependencies.
   - This lets them build context while freeing senior engineers for verification.
5. **Parallel Learning Model**
   - Things Shake knows → reach parity on these.
   - Things Shake doesn't know → learn together.
   - Things Shake doesn't know and won't know → new engineers start picking these up.

The overall philosophy is to invest time upfront in documentation and knowledge transfer to prevent recurring bottlenecks and reduce dependency on individual team members.
Backlog status and updates
## Summary
### Developer Onboarding Challenges
Efforts to integrate new developers faced obstacles due to VPN access limitations and insufficient documentation for the FDG environment. The approach shifted toward asynchronous work via task lists to avoid introducing dependencies prematurely, with a focus on assigning straightforward UI-related tasks to free senior resources for complex logic work.

### Data Audit Queue Spike Analysis
A sudden increase in the data audit queue (reaching ~3,900 invoices) was traced to mismatches between total charges and total amounts on bills. This routing mechanism, designed to flag discrepancies, accounted for 75% of the queue's volume. The spike highlighted gaps in monitoring during peak UDM backlog periods.

### Resolution Strategy for Subtotal Mismatches
Invoices with subtotal discrepancies will be redirected to UBM for resolution, leveraging its interface that displays charge differences for efficient correction. This avoids burdening data entry teams with forensic bill analysis. Exceptions for clients like Sheets and Royal Farms (who previously objected to automated adjustments) will be handled separately.

### Processing Capacity and Backlog Management
Current daily processing capacity stands at 2,400 invoices, but a backlog of 13,000 items in "ready to send" status creates operational strain. Manual intervention is required to move invoices to UBM due to system constraints, with unit-of-measure errors causing additional blocking.

### Arcadia Integration Planning
Preparations are underway to onboard Arcadia for bill scanning and API-based ingestion, targeting Banyan customers first. Their solution provides OCR/PDF processing comparable to current DSS functionality, requiring normalization of data fields and utility-account mapping for seamless integration.

### System Logic Corrections
A critical flaw was identified where DSS incorrectly validated line-item charges against "total amount due" rather than "total current charges," causing erroneous audit routing. This logic was corrected to prevent future mismatches, and 5,000 affected invoices are being moved to UBM for reprocessing (see the sketch below).

### Operational Bottlenecks
Persistent issues prevent automated movement from "ready to send" to UBM, forcing manual intervention and creating workflow delays. Additionally, older invoices with unresolved unit-of-measure errors continue entering the system despite recent fixes, indicating potential snapshot synchronization problems.
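A minimal sketch of the corrected validation rule from System Logic Corrections. The function signature and the one-cent tolerance are assumptions; the meeting only established which total the line items should be compared against.

```python
from decimal import Decimal
from typing import List

TOLERANCE = Decimal("0.01")  # assumption: allow a one-cent rounding difference

def needs_audit(line_items: List[Decimal],
                total_current_charges: Decimal,
                total_amount_due: Decimal) -> bool:
    """Corrected routing check: compare the line-item sum against *total
    current charges*. The old (buggy) behavior compared against total amount
    due, which also includes prior balances and late fees, so any bill with
    a carry-over was routed to audit even when current charges matched."""
    return abs(sum(line_items) - total_current_charges) > TOLERANCE

items = [Decimal("40.00"), Decimal("60.00")]
# Matches current charges, so no audit, even though the amount due differs.
print(needs_audit(items, Decimal("100.00"), Decimal("125.00")))  # -> False
```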
Daily Progress Meeting
## Sprint Progress and Development Updates
The sprint began with 82% development capacity against a planned 122 story points, though the total workload reached 169 points. By the meeting, 132 points were completed, with additional progress expected before sprint closure. Key accomplishments included enhanced batch testing and account linking workflows, which contributed to the higher-than-anticipated workload.

### Report Development and Validation
- **Center report deployment**: Successfully moved to QA testing after overcoming CircleCI infrastructure issues.
- **Count ID report enhancements**: Modifications completed, with MedXL confirming acceptance; no further changes required.
- **AP file error handling**: Implementation finalized pending PR submission, focusing on robust failure management for AP file processing.

### User Experience and System Configuration
- **User preference tracking**: Added a new field to the user table for preference storage, with backend updates to display and save settings.
- **Validation error rules audit**: Created an audit log table to track changes to error rules, though direct database modifications remain untracked.
- **Error page backend**: Patched error rules and implemented batch processing to prevent endpoint failures during large-scale operations (e.g., 300+ entries); see the chunking sketch below.

### Bug Fixes and Feature Refinements
- **Date handling correction**: Resolved an issue where "day of service" fields were incorrectly processed as text instead of numeric values.
- **Service zip autocompletion**: Implemented logic to autocomplete empty service zip fields using location data during bulk reparse, avoiding overrides of existing entries.
- **Barclays/Bargaining fixes**: Addressed frontend and backend issues identified during testing, including Jira ticket integration for transport failures.

### Performance and System Stability
- **Bulk reparse concerns**: Investigated potential performance impacts during large-scale operations (e.g., 500-600 concurrent reparses). Preliminary analysis suggests minimal latency, but stress-testing tools will be prototyped for CI pipeline integration.
- **Session management issues**: Reports of frequent user sign-outs during peak hours (mid-afternoon US time), potentially linked to increased user concurrency or SSO configurations. Mitigations include:
  - Adding a queue consumer to parallelize bulk operations.
  - Investigating token expiration mismatches in multi-tab workflows.
  - Monitoring cookie behavior post-SSO changes.

### Batch Processing and Error Logging
- **Batch ingestion improvements**: Redesigned error tracking for CSV/file ingestion failures, including:
  - New email templates highlighting ingestion-level errors.
  - Centralized job logs displaying run timestamps, entities processed, and failure details.
  - Automated Jira ticket creation for recurring failures to prioritize fixes.
- **Error grouping logic**: Email alerts are triggered per cron job run to avoid spam, excluding historically resolved failures unless they reoccur.

### SSO and Security Setup
Initial SSO configuration was explored, with further integration planned alongside session management troubleshooting.
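A minimal sketch of the chunking pattern behind the error-page batch fix. The chunk size of 100 is an assumption; the meeting only established that requests with 300+ entries were failing the endpoint and are now split into batches.

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")
CHUNK_SIZE = 100  # assumption: the actual batch size wasn't stated

def chunked(items: Iterable[T], size: int = CHUNK_SIZE) -> Iterator[List[T]]:
    """Yield fixed-size chunks so a large update (e.g., 300+ error-rule
    entries) is sent as several small requests instead of one oversized
    call that can time out or fail the endpoint."""
    batch: List[T] = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

# 350 entries -> batches of 100, 100, 100, 50
for batch in chunked(range(350)):
    print(len(batch))
```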
DSS Queue Issues
## Summary
The discussion centered on critical challenges within the invoice processing workflow, primarily involving delays and inaccuracies across systems. Key issues include persistent slowdowns in the OpenAI/Microsoft processing queue, though optimizations are now available to mitigate this. A significant backlog exists, with approximately 3,000 invoices stuck in DSS due to pre-audit failures stemming from validation issues like unidentified vendor codes, missing dates, or unexpected charge types. Deeper investigation revealed systemic inaccuracies in tracking invoice statuses, compounded by a recent script that erroneously reverted invoice statuses and required manual correction. Concerns were raised about duplication of effort between DDIS and DSS, with estimates suggesting up to 5% of invoices might be affected.

Strategically, the vision to sunset DDIS in favor of DSS remains, but feature parity gaps and operational friction hinder adoption. Resource constraints exacerbate these challenges, with teams stretched thin and priorities shifting multiple times daily. New developers will focus on self-contained DSS/FTK app enhancements (e.g., UI fixes) to free bandwidth for core research. Communication with leadership will be streamlined to provide high-level updates without oversharing granular details that could derail focus. Finally, administrative follow-ups on timesheets and Boston trip expenses were noted.

## Wins
- **Queue Optimization Tools:** Proactive monitoring capabilities and Microsoft-provided optimizations are now available to address OpenAI processing slowdowns.
- **Backlog Clarity:** Identified that ~400-700 bills process within 24 hours in the OpenAI queue; the primary backlog (3,000+ invoices) is isolated to DSS pre-audit failures.
- **Script Issue Resolution:** Identified and partially reverted (~300 invoices) a script that incorrectly changed invoice statuses.

## Issues
- **DSS Pre-Audit Failures:** ~3,000 invoices are stalled in DSS due to validation failures (e.g., missing vendor codes, dates, or incorrect charge types), lacking visibility in existing trackers.
- **Data Inaccuracy:** Invoice statuses and counts are unreliable due to undocumented manual interventions, system errors, and potential workflow flaws, complicating root-cause analysis.
- **System Duplication & Transition Hurdles:** Work is duplicated between DDIS and DSS for ~5%+ of invoices. DSS lacks critical features present in DDIS, preventing full migration despite being the strategic future state.
- **Resource & Priority Challenges:** Teams are overloaded, priorities fluctuate constantly, and key personnel absences (e.g., Shake) delay critical-path work like queue analysis. New onboardings add pressure.
- **Architectural Complexity:** The fragmented system (DDIS, DSS, UBM) creates opacity, making it difficult to track invoices end-to-end or explain operational dependencies to stakeholders.

## Commitments
- **Queue & Data Accuracy:** Deep dive into DSS workflow status inconsistencies and invoice tracking inaccuracies to establish reliable metrics (Faisal).
- **DSS Feature Gap:** Plan targeted user testing (potentially Q1) to identify essential missing features in DSS by forcing limited volume processing through it.
- **Leadership Updates:** Provide concise, high-level progress summaries to Abhinav to maintain alignment without unnecessary detail escalation (Faisal, with Sunny's support).
- **New Dev Allocation:** Direct new developers to specific, low-context DSS/FTK UI improvements to free up senior resources (Faisal).
- **Admin Follow-ups:** Resolve Cosmos DB access for error reporting (Faisal/Mer), clarify the overtime approval process with Ryan (Faisal), and submit Boston trip expenses (Faisal).
DDAS Workflow Overview
## Workflow System Overview
The meeting began with a detailed explanation of the workflow and bill status system. Workflows represent queues where bills are processed, while bill statuses indicate a bill's stage within a queue. Key workflows include image review (statuses: waiting for operators, in process), acquisition automation (largely replaced by DSS), and complete (workflow 13). Bill statuses like "parked" (assigned to a user but inactive), "user go back" (audit-triggered corrections), and "suspended" (indefinite hold) were clarified. DSS IPS (workflow 24) was noted as a newer addition not fully integrated into existing documentation.

## Incorrect Status Updates and Reversion
A critical issue involved bills erroneously moved from "complete" (workflow 13) to DSS IPS (workflow 24), which was manually reverted:
- **335 bills were corrected** after being shifted incorrectly, with an additional 452 bills reprocessed or manually reset to workflow 13.
- **255 bills improperly transitioned** from "complete" to other workflows in the last 24 hours, likely due to an automated script. These require urgent review to determine intent and validity (see the detection query sketched below).
- The root cause was traced to a Python script that updated bills individually, suggesting non-standard processing that may have bypassed system safeguards.

## System Overlap and Strategic Concerns
Significant duplication exists across three parallel systems, creating operational inefficiencies:
- **Legacy DDAS** remains active despite plans to decommission it for over five years.
- **Web applications** designed to replace DDAS are only partially functional, stalling migration efforts.
- **DSS implementation** introduced valuable capabilities but operates alongside outdated systems, fracturing focus.

This fragmentation necessitates a strategic realignment in Q1 to consolidate efforts, deprecate redundant systems, and define a unified direction.

## DSS Audit Queue Prioritization
Validating the DSS audit queue's accuracy is the immediate priority to prevent redundant work:
- **~1,700 bills in "failed DSS audit" status** require verification to confirm whether operations teams manually processed them via DDAS.
- **Urgency is heightened** due to limited resource availability, with only one day remaining for key personnel collaboration.
- **3,000+ bills in the legacy DDAS audit queue** are secondary until DSS accuracy is confirmed, ensuring engineering efforts address genuinely unprocessed bills.

## Investigation Plan
Two parallel tracks were defined to resolve immediate issues:
- **Script analysis**: Reverse-engineer the Python script responsible for the incorrect status transitions to assess its full impact radius and prevent recurrence.
- **DSS audit validation**: Audit the DSS "failed audit" queue (workflow status: DSS IPS) to distinguish between:
  - Bills needing manual resolution in DSS.
  - Bills already processed elsewhere that can be ignored.

Technical collaboration will focus on these areas before expanding to legacy system queues.
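A sketch of the detection query for the 255 suspect transitions, written as a SQL constant to be run from Python. The table and column names are placeholders (the actual schema wasn't shared), and T-SQL date functions are assumed; only the workflow numbers 13 and 24 and the 24-hour window come from the meeting.

```python
# Placeholder schema: a bill_workflow_history table recording each workflow
# transition. Real table/column names weren't specified in the meeting.
FIND_SUSPECT_TRANSITIONS = """
SELECT h.bill_id,
       h.from_workflow,
       h.to_workflow,
       h.changed_at,
       h.changed_by
FROM bill_workflow_history AS h
WHERE h.from_workflow = 13            -- "complete"
  AND h.to_workflow   <> 13           -- moved anywhere else, e.g. 24 (DSS IPS)
  AND h.changed_at >= DATEADD(hour, -24, SYSUTCDATETIME())
ORDER BY h.changed_at DESC;
"""

# Reverting would be the inverse update, gated on manual review of the rows
# returned above rather than applied blindly.
```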
DDAS Manual Processed Invoices
## Issue of Duplicate Invoice Processing
A critical problem emerged where invoices already manually processed in DDIS were reappearing in the DSS system for reprocessing, causing redundant work and operational inefficiencies.
- **Evidence of reprocessing**: Bills marked as completed a month ago were recently moved back into DSS workflows, indicating systemic failures.
- **Impact on operations**: Team members unknowingly reworked completed invoices, wasting resources and delaying genuine tasks.
- **Suspected triggers**: Potential causes include automated reprocessing of timed-out bills or manual interventions moving bills from DSS back to production queues.

## Technical Investigation into System Behavior
Participants analyzed why completed bills re-entered processing workflows, focusing on integration gaps between DSS and DDIS.
- **Workflow status discrepancies**: Both systems share a common database for workflow statuses, but DSS may ignore statuses like "output" (completed) and attempt reprocessing.
- **Reprocessing script concerns**: A script designed to retry failed DSS IPS jobs might inadvertently pull already-completed bills if workflow statuses aren't validated.
- **Audit trail limitations**: The system lacks detailed logs to trace *why* bills transition from "completed" back to active workflows.

## Immediate Corrective Actions
Short-term solutions were prioritized to halt redundant processing and clean up existing queues.
- **Query development**: A targeted query identifies bills moved from "completed" status to active workflows (e.g., DSS IPS) for bulk correction.
- **Status reset protocol**: Affected bills will be re-marked as "complete" to remove them from production queues immediately.
- **Scope expansion**: Checks extend beyond DSS audit queues to "ready to send" stages where duplicate bills might be awaiting reprocessing.

## System Design Flaws and Risks
The discussion revealed deeper architectural issues enabling the reprocessing loop.
- **Decoupled systems**: DSS operates semi-independently from downstream processes, allowing bills to advance manually in DDIS while DSS retains "failed" flags.
- **Timeout complications**: Bills originally failing in DSS due to delays (e.g., system overload) might automatically retry days later, even if manually completed elsewhere.
- **Operational risk**: Without synchronization, teams cannot trust queue volumes, risking missed deadlines or client billing errors.

## Long-Term Mitigation Strategies
Fundamental fixes were proposed to prevent recurrence by enforcing workflow validation.
- **Status-based processing guardrails**: DSS should reject bills not in status 24 (DSS IPS), preventing reprocessing of bills in later stages (see the guardrail sketch below).
- **Integration overhaul**: Real-time synchronization between DSS actions and DDIS workflow statuses is essential to reflect true bill progress.
- **Automation safeguards**: Reprocessing scripts must verify workflow statuses before execution to exclude completed bills.

## Next Steps for Resolution
The team committed to rapid diagnostics and solution deployment within the day.
- **Query refinement**: Focus on bills transitioned out of "completed" status yesterday to test the reprocessing-script hypothesis.
- **Cross-validation**: Compare findings against a script used recently to reprocess IPS-failed bills, suspected as a catalyst.
- **Collaborative review**: Technical teams will share query results and script logic to pinpoint the exact failure mechanism.
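A minimal sketch of the status-based guardrail proposed above. The `Bill` shape is a placeholder; the workflow numbers (24 for DSS IPS, 13 for complete) come from the meetings.

```python
from dataclasses import dataclass

DSS_IPS_WORKFLOW = 24      # the only status DSS should pick up for retry
COMPLETE_WORKFLOW = 13     # "complete"; bills here must never be reprocessed

@dataclass
class Bill:
    bill_id: str
    workflow_status: int

def eligible_for_retry(bill: Bill) -> bool:
    """Guardrail described under Long-Term Mitigation Strategies: a retry
    script checks the *current* workflow status before re-queuing, so bills
    manually completed in DDIS are skipped even if DSS still holds a stale
    "failed" flag for them."""
    return bill.workflow_status == DSS_IPS_WORKFLOW

bills = [Bill("A-1", DSS_IPS_WORKFLOW), Bill("A-2", COMPLETE_WORKFLOW)]
retry = [b.bill_id for b in bills if eligible_for_retry(b)]
print(retry)  # ['A-1'] -- the completed bill A-2 is excluded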
Attendance Update
## Summary
The transcript for this meeting contains no substantive discussion; it consists entirely of small talk about participant availability and meeting logistics. No topics, decisions, or action items were captured in the recorded segment.
DSS Daily Status
## DSS Audit Queue Analysis and Challenges
The meeting centered on addressing a significant backlog of approximately 1,600-1,700 records stuck in the DSS audit queue, which require systematic resolution. Key errors include missing billing start/end dates, vendor code identification failures, invalid line item descriptions, and discrepancies in observation types (e.g., usage vs. charge). The team emphasized understanding the root causes of these failures so records can be dispositioned accurately, noting that manual review is time-intensive and unsustainable at scale.

### Error Documentation and Validation Gaps
Critical gaps in documentation hinder error resolution, particularly around validation rules and expected behaviors for DSS processing.
- **Missing validation criteria**: No clear documentation exists for common errors like "invalid unit manager" or "line item description not valid," forcing reliance on ad hoc investigations.
- **Confluence resource limitations**: While Confluence's DSS overview lists validations (e.g., vendor code checks), it lacks specifics on error-triggering conditions, complicating troubleshooting.
- **Data extraction hurdles**: Downloading error details is cumbersome, as the system only allows exporting 100 records at a time, requiring repetitive manual effort for full dataset analysis (see the pagination sketch after this summary).

### Workflow Routing and System Logic
Confusion persists around the routing logic that determines whether bills go to DSS audit or DDIS (account setup), impacting backlog management.
- **Routing conditions**: Bills are sent to DSS audit for integrity checks (e.g., charge-total mismatches) or manual review needs, while DDIS handles exclusions (e.g., foreign currency, summary bills over six pages).
- **Ambiguous workflow states**: Workflow 11 ("Audit Data") and Workflow 18 ("Audit Account Setup") lack clear definitions, leading to uncertainty about where bills are stuck and why.
- **Process gaps**: Bills in DSS haven't been processed in DDIS, confirming they require net-new attention rather than reprocessing.

### Data Discrepancies and Metric Alignment
Disparities between system metrics and actual DSS records obscure the true backlog scale.
- **Invoice vs. meter counts**: System status dashboards report ~5,800 invoices, but DSS shows only ~1,600 records due to metric differences; invoices may include multiple meters (e.g., electric supply and distribution as separate line items).
- **Age of backlog**: Some invoices have been stuck since June, indicating chronic processing failures that risk obsolescence for older bills.
- **Prioritization urgency**: Newer bills are the highest priority, but even recent entries lack automated triage capabilities.

### Resolution Strategy and Next Steps
The team prioritized clearing the DSS queue through a mix of data analysis and manual intervention, deprioritizing peripheral tasks.
- **Export and triage**: An updated SQL query will extract all error data (including client IDs and error details) for CSV analysis, enabling filtering by high-impact clients (e.g., Bill Pay customers).
- **Error pattern analysis**: Focus on recurring issues like missing dates or invalid descriptions to separate systemic fixes (e.g., validation rule updates) from one-off corrections.
- **Frontend improvements deferred**: Real-time error highlighting in the UI was deemed low-priority due to implementation complexity, shifting focus to backlog reduction.

### System Limitations and Operational Risks
Persistent technical and process constraints threaten backlog management efficiency.
- **Reprocessing impractical**: Sending old bills back through DSS is ineffective without validation-rule updates, as the errors would simply recur.
- **No DDIS reconciliation**: There's no automated way to verify whether bills in DSS were manually processed in DDIS, risking duplicate effort or data orphans.
- **Resource bottlenecks**: Manual review of 1,600+ records is untenable long-term, underscoring the need for upstream fixes in data ingestion or validation logic.
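A sketch of scripting around the 100-record export limit noted above. The HTTP endpoint, parameter names, and response shape are all hypothetical; only the 100-record page size comes from the meeting.

```python
import csv
import requests  # assumption: the export is served over HTTP

EXPORT_URL = "https://dss.example.internal/api/audit-errors"  # placeholder URL
PAGE_SIZE = 100  # the hard export limit noted in the meeting

def export_all_errors(out_path: str) -> int:
    """Page through the 100-record export limit and concatenate everything
    into one CSV for offline triage. Field names come from the first page."""
    written = 0
    with open(out_path, "w", newline="") as fh:
        writer = None
        offset = 0
        while True:
            resp = requests.get(EXPORT_URL, params={"limit": PAGE_SIZE, "offset": offset})
            resp.raise_for_status()
            rows = resp.json()
            if not rows:
                break
            if writer is None:
                writer = csv.DictWriter(fh, fieldnames=rows[0].keys())
                writer.writeheader()
            writer.writerows(rows)
            written += len(rows)
            offset += PAGE_SIZE
    return written

print(export_all_errors("dss_audit_errors.csv"))
```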
Mantis <> Navigator API Integration
## Prospect
The prospect is focused on integrating advanced energy management and sustainability solutions into their platform to enhance customer offerings. They aim to leverage Navigator's APIs for energy efficiency insights, rebate opportunities, utility data management, and carbon accounting. Their role involves driving energy projects by providing actionable data to clients, with an emphasis on scalability and seamless user experiences. The prospect prioritizes technical feasibility, real-time data processing, and minimizing redundant development effort, particularly around UI integration.

## Company
The company operates a platform (referred to as "Mantis" or "Perform") that serves clients in energy management and facility optimization. They manage diverse portfolios, from small sites to large-scale enterprises, and seek to embed Navigator's capabilities directly into their ecosystem. Key offerings include:
- **Energy Efficiency Insights**: Predictive modeling of building energy usage and improvement opportunities.
- **Rebate Optimization**: Identifying and applying utility incentives for sustainability projects.
- **Utility Bill Management (UBM)**: Handling utility data aggregation, validation, and analytics.
- **Carbon Accounting**: Emissions tracking and reporting for compliance or voluntary disclosures.

The company emphasizes flexibility, supporting clients at any stage of their sustainability journey, whether starting with basic address-based insights (Tier 1) or advancing to detailed, equipment-specific analyses (Tier 2).

## Priorities
1. **API Integration for Glimpse and Rebates**:
   - Combine energy efficiency (Glimpse) and rebate insights into a single API call to streamline data retrieval.
   - Use Tier 1 (address-only) calls for high-volume prospecting and Tier 2 (detailed building data) for targeted project planning; see the sketch below.
   - Ensure real-time API responsiveness to enable immediate customer-facing insights.
2. **Embedded User Experience**:
   - Integrate Navigator's functionality (e.g., UBM dashboards, carbon reports) within their platform without requiring users to switch interfaces.
   - Explore SSO (Single Sign-On) and white-labeling options to maintain brand consistency.
3. **Carbon Accounting Expansion**:
   - Incorporate emissions data (CO2e) into customer reports, leveraging both Glimpse predictions and dedicated carbon accounting APIs.
   - Support tiered disclosure needs, from basic emissions estimates to granular reporting for regulatory compliance.
4. **Scalable Data Handling**:
   - Process large portfolios efficiently, using Tier 1 for broad initial scans and Tier 2 for deeper dives on qualified prospects.
   - Map legacy data models to Navigator's API parameters to minimize onboarding friction.
5. **Rebate Execution Synergy**:
   - Align Glimpse-generated rebate insights with Navigator's rebate team for end-to-end project support, from discovery to incentive claims.
6. **Technical Collaboration**:
   - Clarify credit structures for API calls (e.g., unlimited Tier 1 vs. metered Tier 2 usage).
   - Address UBM onboarding complexities, such as utility credential management and data synchronization.
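A sketch of the tiered call pattern discussed in the priorities list. This is not Navigator's documented API: the endpoint URL, payload fields, and auth header are all illustrative; only the Tier 1 (address-only) vs. Tier 2 (detailed building data) distinction comes from the meeting.

```python
import requests  # endpoint, payload fields, and auth header are hypothetical

BASE = "https://api.navigator.example/v1/insights"  # placeholder URL

def glimpse_insights(address: str, building: dict | None = None) -> dict:
    """Tiered request pattern from the meeting: Tier 1 sends only an address
    for broad prospecting; Tier 2 adds detailed building data for qualified
    prospects. The request/response shapes are illustrative only."""
    payload = {"address": address}
    if building:                        # Tier 2: equipment-specific analysis
        payload["building"] = building  # e.g., square footage, HVAC details
    resp = requests.post(BASE, json=payload, headers={"Authorization": "Bearer <token>"})
    resp.raise_for_status()
    return resp.json()

# Tier 1 scan of a portfolio, then a Tier 2 deep dive on one qualified site.
tier1 = glimpse_insights("123 Main St, Baltimore, MD")
tier2 = glimpse_insights("123 Main St, Baltimore, MD",
                         building={"sqft": 42_000, "hvac": "rooftop units"})
```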
IT Setup Follow-up
## Summary
The transcript contains extremely limited substantive content, making comprehensive summarization impractical. The available material consists of a few fragmented statements that appear to reference technical or operational matters, though specific context is absent.

### Technical Task Status
A brief reference was made to progress on a technical task.
- **Opening equipment clearance**: The mention of having "cleared the opening equipment" suggests prior completion of a preparatory step for an unspecified system or process.
- **Ongoing technical work**: The statement "we need to work on the it" indicates pending technical work, though the specific nature of "it" remains undefined.
- **Timeline request**: A request was made for additional time to complete this work ("Give me some time to work it").

### Unclear References
Several ambiguous terms were mentioned without sufficient context for interpretation.
- **"DDIs"**: The phrase "It's set in DDIs" suggests a configuration or status within a system referred to as DDIs, possibly the DDIS system referenced in other meetings or another domain-specific acronym.
- **"Beaver" reference**: The isolated phrase "Just had a beaver" lacks any contextual explanation, making its relevance indeterminable from the transcript.

### Communication Oversight
An admission of a communication lapse was noted.
- **Unresponded message**: The opening exclamation "Oh, my God. I haven't responded to him" acknowledges a failure to reply to an unidentified individual, implying an outstanding communication obligation.

### Overall Context Gaps
The transcript provides insufficient detail to determine the meeting's purpose, participants, or outcomes: there is no discernible agenda, no project names, objectives, or technical specifications, and no conclusions, action items, or strategic directions can be inferred.
UBM Project Group
## Project Tracking Framework
A new tracking system was implemented using a Gantt chart and spreadsheet to monitor cross-team initiatives through year-end. Key elements include:
- **Color-coded ownership**: Purple indicates Tara's team responsibilities, yellow denotes UBM-handled items, and blue marks Tim's group tasks, enabling clear visualization of workload distribution.
- **Monthly prioritization**: Work is segmented into October, November, and December buckets to ensure focused progress tracking, with weekly updates added to the spreadsheet.
- **Progress visualization**: Completed items will turn green on the Gantt chart, while a supplementary spreadsheet details each initiative's current challenge, proposed solution, timeline, and notes for leadership reporting.

## Customer Issue Management System
Implementation of a HubSpot-based ticketing system is underway in phases:
- **Phase 1 (October)**: Framework and form creation are complete. CSMs will gain license access next week to submit tickets internally, with notifications routed to development teams.
- **Phase 2 (November)**: Enable customer-initiated tickets via platform integration, pending clarification on link placement and technical requirements.
- **Phase 3 (future)**: Jira integration deferred to Q1 next year due to development complexity and current workload constraints.

Access requests for the HubSpot Service Desk were noted for resolution.

## Reporting & Analytics Updates
Several reporting initiatives were reviewed with adjusted timelines:
- **Invoice Processing Report**: Development completed by Sunny; ready for deployment by month-end.
- **Activity Downloads Report**: Requirements finalized but delayed to early November due to data audit priorities.
- **Power BI Self-Service**: The past-due amount report is already operational. Additional self-service mapping capabilities were confirmed as live in UBM.
- **Onboarding Pipeline Report**: Scheduled for November development start after current issue resolution.

## Data Integrity Initiative
A multi-phase approach to addressing system errors includes:
- **Error identification (October)**: Targeting completion by month-end, though 2-3 items may spill into early November due to resource availability.
- **Documentation & fixes (November)**: Comprehensive error resolution planned, with a potential extension to December if complexities arise.
- **Interpreter layer (December)**: Development of translation logic between systems to address unresolvable data service issues, considered distinct from the immediate fixes.

## Billing Rule Optimization
Review of payment processing rules has commenced ahead of schedule:
- **Rule relaxation**: Analysis identifies specific rules causing bill pay failures, with adjustments targeted for November implementation.
- **Error review synergy**: Findings from the data integrity initiative will directly inform rule modifications to reduce transaction failures.

## Future Initiatives
Longer-term items were cataloged for later prioritization:
- Tim's team contributions will be added to the tracker, focusing on December-or-later timelines.
- Leadership updates will highlight October progress first, with November/December deliverables presented sequentially in subsequent reviews.
CDS/DSS - Daily Stand Up 2025
## Summary
### ADO DDAS Project Access
Access provisioning for the ADO DDAS project was prioritized to enable team functionality.
- **Access scope clarification**: Focused on granting permissions beyond Agile boards to include repositories and other essential resources, with urgency due to team members returning the next day.
- **Implementation plan**: Confirmed the capability to provide access to boards and repos, addressing potential limitations in user permissions.

### DDS Connection Issues
An investigation into intermittent VM connectivity problems affecting invoice processing efficiency was discussed.
- **Root cause analysis**: Identified three potential causes: user-side network latency, insufficient VM resources for new connections, or limitations in Microsoft's connection balancing algorithm.
- **Diagnostic measures**: Deployed profiling tools to capture event logs (estimated at 20 GB) for analysis, potentially using LLMs for processing. Urgency was emphasized given the impact on processing 3,000+ invoices.
- **Operator capacity planning**: Highlighted the critical need to define the maximum number of concurrent DDS operators to optimize VM allocation (currently 12 VMs supporting 36 connections).

### DSS Audit Workflow Updates
Modifications to the DSS audit system aimed to streamline invoice tracking and reporting.
- **Workflow consolidation**: Unified 17 disparate workflows under a single DSS audit category in Power BI, improving clarity in invoice status tracking (e.g., "DSS IPS" replacing fragmented labels like "Data Audit").
- **Data reconciliation**: Addressed discrepancies between the new report (showing 4,100 invoices) and the legacy system status report (showing ~3,000), noting potential delays in data synchronization.

### DSS UI Enhancements
User interface improvements were proposed to handle high-volume invoice error management.
- **Error visualization**: Requested highlighted line items for failed/data-audit tickets to accelerate manual review of 3,800+ invoices.
- **Automation potential**: Explored systematically pulling all errors into a centralized view to avoid manual processing, with multi-select functionality considered for batch operations.

### FTG Connect Access
VPN configuration for FTG Connect access was resolved to ensure operational readiness.
- **Access resolution**: Required Azure VPN client setup using a specific XML profile for secure connectivity, addressing initial access failures.
- **Contingency planning**: Emphasized the need for reliable access to troubleshoot future issues promptly.

### System Performance Monitoring
Ongoing efforts to stabilize systems and preempt disruptions were underscored.
- **Proactive logging**: Committed to analyzing VM event logs to preempt connection failures, particularly for resource-intensive processes like FTG Connect.
- **Resource allocation**: Stressed balancing VM resources against operator demand to maintain system stability during peak usage.
Audit Errors Alignment
## Vendor Exclusion Implementation
The team discussed implementing vendor exclusions in DSS to prevent frequent processing bottlenecks.
- **Exclusion status**: Currently, only client exclusions are functional, with vendor exclusions pending implementation despite being a high-priority need.
- **Technical approach**: Leveraging the existing account exclusion feature will simplify vendor exclusion development, with new engineering resources expected to accelerate this next week.
- **Urgent cases**: Specific refuse vendors were highlighted as critical exclusions due to consistent processing failures requiring manual intervention.

## Data Audit Backlog Management
Approximately 3,800 invoices are stalled in pre-prep audit, requiring urgent processing strategies.
- **Composition analysis**: The backlog includes duplicates, account setup items, and meter-level discrepancies, with half concentrated in audit queues.
- **Prioritization challenge**: New client onboarding demands conflict with backlog clearance, necessitating efficient triage methods.
- **Data integrity checks**: Instances of invoices marked "ready to send" despite already being processed were noted, complicating workflow tracking.

## Service Zip Code Handling Protocol
A decision was made to enhance service zip code logic for payment processing reliability.
- **Fallback mechanism**: When service zips are missing from bills, location zips will be used as defaults to support payment portal requirements (see the fallback sketch at the end of this summary).
- **Downstream impacts**: This addresses critical needs for guest payment portals while acknowledging that credential-based payments remain unaffected.
- **Mapping logic**: The solution avoids disrupting UBM location mapping screens by maintaining consistent data sourcing.

## Remittance Address Resolution
Systemic gaps in remittance address handling were identified, prompting process changes.
- **Vendor data synchronization**: Discrepancies emerged between DSS vendor setups and UBM bill data, particularly for entities like City of Tulsa Utilities, where addresses failed to propagate.
- **Automation fix**: UBM will now pull remittance details directly from vendor records when bill data is missing, ensuring payment accuracy.
- **Validation gap**: No current checks ensure vendor information consistency between systems, creating potential payment failures.

## Error Code Analysis and Handling
Key error codes were reviewed to optimize audit workflows and downstream impacts.

### Error 2021 (Missing Bill Information)
- **Audit approach**: This non-blocking error for missing Bill 2 data will remain in audit queues for potential outreach team action.
- **Value assessment**: Bill 2 data primarily aids address validation for client outreach rather than being essential for payment processing.

### Error 2023 (Late Fee Identification)
- **Current handling**: Auto-fix protocols will continue, but notifications may be added if clients like Banion or Gopuff require late fee alerts.
- **Data capture issue**: Late fees often get bundled into previous balances during extraction, obscuring visibility.

### Error 2025 (Estimated Charges)
- **Audit retention**: Flagging estimated charges remains valuable for clients like Sheets despite limited operational use.

### Error 2027 (Unit of Measure Mismatches)
- **Problem patterns**: Recurring issues include power factor percentages mislabeled as kW, "lamps" line items miscategorized, and zero-usage bills lacking units.
- **Resolution tactics**: Temporary manual overrides are used (e.g., forcing CCF for water bills), but systemic fixes require DSS extraction improvements.
- **Volume insight**: Load/power factor errors dominate this category, with fewer than 10 other complex cases pending.

## Vendor Management System Concerns
Critical vulnerabilities in vendor data flows between systems were highlighted.
- **Sync limitations**: Vendor updates in DSS don't auto-populate remittance details in UBM bills, causing payment risks.
- **Process dependency**: Manual vendor confirmation during enrollment is currently essential, since DSS lacks the accuracy for automated vendor mapping.
- **Risk example**: Mismatched vendor names (e.g., abbreviations vs. legal names) could disrupt payment workflows if unaddressed.

## Special Billing Scenarios
Edge cases like historical uploads and unique line items require tailored handling.
- **URA client complexities**: Excel-based uploads frequently contain unit-of-measure errors needing manual correction.
- **"Lamps" line items**: These uncommon charges are often reclassified as cost items to bypass generation charge validation failures.
- **Zero-usage bills**: When consumption data is missing (e.g., the Aspen examples), historical patterns or manual CCF assignments are used as workarounds.
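A minimal sketch of the two fallback rules decided above (service zip and remittance address). The function signatures are assumptions; only the fallback order comes from the meeting.

```python
from typing import Optional

def resolve_service_zip(bill_zip: Optional[str], location_zip: str) -> str:
    """Service Zip Code Handling Protocol: use the bill's service zip when
    present, otherwise fall back to the location zip so guest payment
    portals always receive a value."""
    return bill_zip or location_zip

def resolve_remittance_address(bill_remit: Optional[str],
                               vendor_remit: Optional[str]) -> Optional[str]:
    """Remittance Address Resolution: prefer the address extracted from the
    bill; when extraction left it blank, pull from the vendor record."""
    return bill_remit or vendor_remit

print(resolve_service_zip(None, "74103"))                      # -> '74103'
print(resolve_remittance_address(None, "PO Box 0000, Tulsa"))  # vendor record wins
```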
DSS Alignment
## Data Audit Queue Analysis
The meeting focused on resolving discrepancies in audit queue volumes and processing capabilities. Current estimates indicate approximately 5,700 meters in the DDS data audit queue, with a weekly processing target of 5,000-6,000 meters achievable under optimal resource allocation. Key details include:
- **Resource-dependent throughput**: Each processor handles ~250 meters daily, but backlog clearance fluctuates with staffing.
- **Queue prioritization**: Recent efforts focused on clearing prepay queues before addressing post-pay volumes.

## System Data Discrepancies
Significant inconsistencies emerged between system reports, complicating backlog assessment:
- **DSS report conflicts**: The System Status Report showed 3,107 bills in DSS (representing bill records, not meters), while Power BI indicated 5,700 meters in audit queues.
- **Data interpretation challenges**: Bill records in DSS don't map directly to meter counts, as one bill may encompass multiple meters. For example, a single bill for "Altus Management" contained 30 meters, but only one required data audit (see the reconciliation sketch below).
- **Resolution strategy**: Urgent reconciliation is needed between DSS production data and audit reports to align metrics.

## UBM System Bottlenecks
Critical errors in the Utility Bill Management (UBM) system are causing operational delays:
- **Technical failures**:
  - *Partial batch restrictions*: UBM incorrectly blocks partial energy cap batches despite no policy requiring this.
  - *Corrupted PDFs*: Broken files in "ready to send" status halt entire workflows.
- **Customer impact**: Invoices stuck in UBM prevent customer visibility, leading to complaints about perceived invoice retrieval failures.
- **Mitigation focus**: Immediate troubleshooting targets partial batches and file integrity issues to unblock the "ready to send" pipeline.

## Support Coverage and Workflow Gaps
Time zone limitations and system dependencies exacerbate operational risks:
- **Resource gaps**: The lack of Eastern Time coverage for critical systems (e.g., DSS, UBM) during outages disrupts issue resolution.
- **Documentation need**: Procedures for common fixes require documentation to enable cross-time-zone support.
- **Customer pressure**: Stakeholders like Victor's CFO demand credential updates, but system inaccessibility (e.g., Smartsheet outages) impedes responsiveness.

## Strategic Next Steps
Two parallel workstreams were defined to address backlogs and system stability:
- **DSS audit cleanup**:
  - Manual review of incomplete records (e.g., missing dates) will begin, with complex cases escalated.
  - Systematic fixes will target recurring data gaps in IPS (Invoice Processing System) to prevent future manual interventions.
- **UBM error triage**:
  - Dedicated sessions are scheduled to diagnose UBM errors, including partial batches and file corruption.
  - Goal: achieve consistent "ready to send" workflow automation to prevent recurring bottlenecks.
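A small sketch of the bill-to-meter reconciliation mentioned above, using pandas on a meter-level extract. The column names and sample data are hypothetical; the point is simply that bill-level and meter-level counts diverge whenever a multi-meter bill has only some meters in audit.

```python
import pandas as pd

# Hypothetical extract: one row per meter, flagged if that meter needs audit.
# Mirrors the Altus Management example: one bill with 30 meters, one of
# which requires data audit.
meters = pd.DataFrame({
    "bill_id":     ["B1"] * 30 + ["B2", "B2", "B3"],
    "needs_audit": [True] + [False] * 29 + [True, True, False],
})

bills_in_audit = meters.loc[meters.needs_audit, "bill_id"].nunique()
meters_in_audit = int(meters.needs_audit.sum())

# Explains why a bill-level report and a meter-level report disagree.
print(f"bills with >=1 audit meter: {bills_in_audit}, audit meters: {meters_in_audit}")
```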
User Activity Report Requirements
## Summary

### System Transition and Technical Issues
The meeting addressed ongoing challenges with the legacy DDS system, which is causing operational disruptions due to its age and fragility. Participants highlighted that even minor issues can lead to hours of troubleshooting, emphasizing the urgency of transitioning to a modern solution. A protocol error preventing system access was cited as a recent example, reinforcing the need for architectural changes.

### User Activity Report Enhancements
Key requirements for the user activity report were finalized to improve performance tracking and data accuracy:
- **Track uploads via FTG Connect and FTP**: Currently, manual tracking is required for files uploaded through these channels, leading to underreported productivity metrics. The solution must count individual files within ZIP archives (e.g., 300 files in a ZIP should register as 300 uploads); a counting sketch follows this summary.
- **Resolve blank "snooze accounts"**: Discrepancies between offshore team records and system data must be fixed to ensure accurate attribution of snoozed tasks.
- **Filter active employees only**: The report dropdown must exclude inactive users to streamline data analysis.
- **Multi-select functionality**: Enable selection of multiple vendors/users simultaneously for efficient reporting.

### Outstanding Tasks Client View Report Updates
Critical modifications were defined for the Outstanding Tasks report to improve task management:
- **Add "task created" date**: This will help prioritize tasks based on urgency (e.g., identifying tasks stagnant for 5+ days).
- **Include snooze notes and frequency**: Operators need to log reasons for snoozing (e.g., "account closed but historical data required") to reduce redundant efforts.
- **Future due date integration**: Plans to incorporate vendor-level payment due dates for proactive follow-ups were noted as a longer-term goal.

### DSS Audit Queue Crisis
A backlog of ~4,000 bills stuck in the DSS audit queue was identified as a high-priority risk:
- **Data reconciliation urgency**: Bills may be duplicated or incorrectly flagged if manually processed in DDIS but left in DSS, requiring cross-system validation.
- **UI improvements for triage**: The DSS app will add filters for workflow status (e.g., "DSS IPS," "Audit Data") and highlight error fields (e.g., missing "billing start date") to accelerate resolution.
- **Error analysis**: Engineers will categorize recurring failures (e.g., date-related vs. vendor-specific issues) to identify systemic fixes.

### Strategic Next Steps
Immediate actions include updating the DSS report to exclude processed bills, refining the app’s filtering capabilities, and auditing the 4,000-bill backlog. Longer-term priorities involve rebuilding reports for scalability and integrating snooze-note functionality into FTG Connect. Performance tracking enhancements (e.g., upload credits) will be fast-tracked to prevent productivity misrepresentation.
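A minimal sketch of the upload-crediting rule described above, assuming uploads arrive either as plain files or as ZIP archives; the function name and crediting convention are illustrative, not the actual report implementation.

```python
# Count each file inside a ZIP archive as an individual upload credit.
import zipfile
from pathlib import Path

def count_uploads(path: Path) -> int:
    """Return the number of upload credits for one submitted file."""
    if path.suffix.lower() == ".zip":
        with zipfile.ZipFile(path) as zf:
            # Skip directory entries so only real files are credited.
            return sum(1 for info in zf.infolist() if not info.is_dir())
    return 1  # a plain file counts as a single upload

# Example: a ZIP containing 300 invoices should register as 300 uploads.
```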
DSS Audit Workflow
## DSS Audit Queue Issues
A significant backlog of approximately 4,000 failed items was identified in the DSS audit queue, requiring intervention to prevent stagnation. These failures stem from issues like unsupported units of measure, missing build dates, or pre-check errors. Key concerns include:
- **Ownership ambiguity**: Uncertainty exists over whether the team should own resolution or route items to a data-oriented queue (e.g., IQ), as failures indicate potential gaps in predefined DSS criteria.
- **System disconnect**: Lack of synchronization between DSS and external systems (e.g., audit workflows) complicates tracking, necessitating a unified source of truth.
- **Operational impact**: The volume risks becoming unmanageable without proactive monitoring and resolution protocols.

## Retry Mechanism Implementation
A scheduled job for reprocessing failed invoices was discussed to replace manual interventions; automated retries had previously been halted due to system overload. Critical details (a sketch of the job follows this summary):
- **Automation design**: The job would run nightly (e.g., 1-2 AM) to retry failures via OpenAI’s IPS processing, with batch-size adjustments to balance efficiency and system load.
- **Current limitations**: Manual reruns remain a stopgap solution until automation is reactivated.

## Reporting and Dashboard Enhancements
Power BI reports require urgent usability improvements to monitor queues effectively. Focus areas include:
- **Filtering system overhaul**: Adding multi-select options (e.g., status types beyond "failed") and persistent sorting across pages to navigate large datasets (e.g., 12,000+ items).
- **New metrics**: Introducing a "failed queue" column to distinguish between genuine failures and items awaiting processing.
- **Accessibility**: Enabling admin permissions for independent report refreshes via semantic models in Power BI.

## OpenAI Batch Processing Optimization
Feedback highlighted inefficiencies in how batches are sent to OpenAI, causing delays. Proposed adjustments:
- **Batch size increase**: Scaling from 5 to 10 rows per file to reduce overhead, as smaller files prolong processing times.
- **Implementation review**: Investigating code-level changes to align with OpenAI’s recommended batching practices for performance gains.

## Historical Data and Cost Tracking
The need for historical queue analytics emerged to contextualize weekly performance. Capabilities discussed:
- **Date-range filtering**: Assessing metrics like processed/failed invoices and costs (e.g., IPS/OpenAI charges) over specific periods.
- **Trend analysis**: Using table views to track volume fluctuations (e.g., 5,000 processed vs. 7,000 pending in a week).

## Pending System Adjustments
Minor but critical fixes were flagged for follow-up:
- **Data corrections**: Addressing automation errors for natural gas billing and the handling of total charges.
- **Credit workflows**: Prioritizing credit-related updates after resolving immediate queue issues.
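A minimal sketch of the nightly retry job, assuming the proposed schedule and the 5-to-10-row batch increase; `fetch_failed_invoices` and `reprocess` are hypothetical placeholders for the actual data-access and IPS submission helpers, which are not described in the meeting.

```python
# Illustrative nightly retry job; scheduling (e.g., a 1-2 AM cron entry)
# is assumed to happen outside this process.
import time

BATCH_SIZE = 10  # rows per file sent to OpenAI, per the proposed increase

def fetch_failed_invoices() -> list:
    """Hypothetical helper: load items currently in a failed state."""
    return []

def reprocess(batch: list) -> None:
    """Hypothetical helper: resubmit one batch through IPS processing."""
    ...

def nightly_retry() -> None:
    failed = fetch_failed_invoices()
    for i in range(0, len(failed), BATCH_SIZE):
        reprocess(failed[i:i + BATCH_SIZE])
        time.sleep(1)  # pace submissions to avoid overloading DSS again
```

The pacing delay reflects the earlier lesson that an unthrottled retry loop worsened DSS load; any real implementation would tune both the batch size and the delay.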
DSS Check-in
## OpenAI Batch Queue Status and Processing
Approximately 1,700 items are currently pending in the OpenAI batch queue, with over 1,000 in a failed state. These failures primarily occur during high-volume processing windows (5-10 days per finance cycle) due to request overload. Failed items remain stuck until manually reprocessed, creating a recurring backlog that requires intervention.

## Failure Analysis and Root Causes
The system experiences failures when excessive concurrent requests overwhelm capacity, causing transactions to abort. Key observations include:
- **Volume sensitivity**: System stability degrades during peak processing periods, leading to transaction drops.
- **Retry limitations**: An automated retry mechanism was previously disabled because it exacerbated load on the DSS, worsening performance issues.
- **Manual dependency**: Current resolution requires direct database intervention (DBI) or custom scripts to reset failed items.

## Retry Process Optimization
A nightly automated retry script will be implemented to clear failed queues systematically. Key decisions:
- **Execution timing**: The script will run at 12 AM daily to minimize interference with peak operational hours.
- **Automation benefit**: Scheduled processing eliminates the need for manual intervention and prevents backlog accumulation.
- **Ticket alignment**: Work items will be formalized through ticketing to track implementation progress.

## Audit Workflow Distinctions
Two audit systems operate with different failure-handling behaviors:
- **Data Audit workflow**: Managed by Ops (Afton), it processes builds requiring review.
- **DSS audit**: Features a pre-audit stage where failures leave builds in a "failed DSS audit queue" status without escalating to the main audit workflow. Failures frequently stem from multiple pre-audit checks conflicting or timing out.

## Next Steps and System Review
Immediate priorities include automating the retry script and analyzing DSS audit failure patterns. The abrupt call conclusion left unresolved questions about DSS failure root causes, warranting deeper investigation into pre-audit check management.
DSS Check-in
## Summary

### Queue Status Investigation
The discussion focused on determining the current number of items in various processing queues within the DSS system. Key findings include:
- **Real-time queue metrics are unavailable**: The existing Power BI report only shows historical data based on *creation dates*, not current queue status. For example, filtering to "Today" showed ~500 items, while "This Month" showed 13,000, indicating the data reflects creation timelines rather than active backlog.
- **Ambiguous status definitions**: Confusion exists around what "In Progress" signifies: whether items are awaiting OpenAI responses or stuck in DSS workflow transitions.

### Report Limitations
The current reporting mechanism was identified as inadequate for operational monitoring:
- **Manual refresh requirement**: The Power BI report doesn’t update in real time; manual refreshes are needed hourly.
- **Misaligned categorization**: Statuses like "Audit Queue" and "Failure Queue" don’t clearly distinguish between items pending external processing (e.g., OpenAI API responses) and items routed internally (e.g., to Data Audit teams) but stalled in DSS workflows.
- **Stakeholder impact**: Inaccurate data is causing miscommunication with stakeholders (e.g., Terra reporting 3,000-5,000 queued items without validation).

### Action Plan for Reporting Improvements
Immediate steps were defined to address visibility gaps:
- **Develop a real-time queue dashboard**: A new report will be created to show *current* counts per queue state (Audit, Failure, In-Progress), with clear definitions for each status; a small aggregation sketch follows this summary.
- **Prioritize urgency**: This task takes precedence over other feature work, as queue metrics are critical for operational decisions and stakeholder alignment.
- **Collaborative review**: The new report will be validated in a dedicated session before further development tasks are discussed.

### Context on Operational Concerns
Underlying issues driving the analysis were clarified:
- **Unresolved processing delays**: Alarms triggered on Friday indicated potential queue backlogs, but existing tools couldn’t verify the scale.
- **Stakeholder pressure**: Abhinav raised concerns about low processing volumes (e.g., "only 10 invoices processed"), highlighting the need for accurate data to address performance questions.
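The key change the dashboard needs is aggregating by an item's *current* status instead of its creation date. A minimal sketch, assuming items expose a live `status` field; the status labels and record shape are assumptions, not the actual DSS schema.

```python
# Aggregate a snapshot of items by their present status, not creation date.
from collections import Counter

QUEUE_STATES = ("audit", "failure", "in_progress")

def current_queue_counts(items: list[dict]) -> dict[str, int]:
    """items: records with a 'status' key reflecting live queue state."""
    counts = Counter(item["status"] for item in items)
    return {state: counts.get(state, 0) for state in QUEUE_STATES}

snapshot = [{"status": "audit"}, {"status": "failure"}, {"status": "audit"}]
print(current_queue_counts(snapshot))
# {'audit': 2, 'failure': 1, 'in_progress': 0}
```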
DSS-UBM Alignment
## Technical Challenges and System Outages
Recent technical disruptions significantly impacted operations, primarily due to an AWS outage that caused cascading failures across services. Internet instability and mobile network issues further hampered productivity, affecting critical tools like Confluence and Zoom. These disruptions highlighted infrastructure vulnerabilities, particularly in light of Amazon's cost-cutting measures discussed prior to the incident. Performance degradation was noted across multiple platforms, with teams struggling to maintain workflow continuity during peak hours.

## DSS Team Capacity and Resource Constraints
Resource limitations within the DSS team are creating bottlenecks, with only one current engineer handling escalating demands. Key points include:
- **Temporary carbon accounting engineers**: Being onboarded to alleviate pressure, though their ramp-up time will delay immediate productivity gains.
- **Prioritization challenges**: The growing backlog of tasks requires triage, with critical-path items competing for limited bandwidth.
- **Holiday season impact**: Mid-November onward will see reduced capacity due to vacations, making October/early November the only viable window for complex deliverables.

## Bill Validation Rules Strategy
A methodical approach is being implemented to review and reactivate UBM/DSS bill validation rules:
- **Phased re-enablement**: Starting with existing active rules before addressing previously relaxed validations, focusing first on integrity checks and AP-related rules.
- **Collaborative troubleshooting**: Joint sessions identify root causes for errors, splitting fixes between UBM and DSS based on system ownership.
- **Rule deprecation**: Certain validations, like *control code* requirements, are being permanently retired because they no longer align with strategic needs and aren't triggering current errors.

## UBM System Limitations and Architectural Impacts
Proposed changes to UBM's bill validation logic face significant technical hurdles:
- **Architectural conflicts**: Introducing bill edits during reparse operations contradicts original design assumptions, risking performance degradation in bulk processing.
- **Multiple update points**: Changes must propagate consistently across bill ingestion, edit, and reparse workflows, increasing implementation complexity.
- **Timeline constraints**: Even minor UBM modifications are unlikely before December due to competing AP commitments and onboarding priorities.

## Knowledge Sharing and Cross-Team Collaboration
Mitigating single points of failure is critical, especially given uncertainty around key personnel availability:
- **Ruben's pending absence**: His upcoming surgery creates knowledge-transfer urgency, requiring work redistribution across the team.
- **Cross-functional pairing**: Alex Nikish is designated as the primary UBM liaison for DSS (specifically Shake), continuing previous JSON collaboration work.
- **Documentation gap**: Ruben lacks bandwidth to script routine tasks for new data engineers, slowing onboarding despite hiring progress.

## Near-Term Action Plan
Immediate focus areas were established to maintain momentum:
- **Validation workshops**: Daily sessions scheduled to systematically address error resolutions, leveraging available bandwidth before holiday disruptions.
- **CSV-to-JSON transition**: Coordinating with Matthew (recently returned) and Shake to test JSON ingestion paths and decommission CSV dependencies.
- **Error ticket creation**: Finalizing documentation for identified issues once current SSQ-related fires are resolved.
Teams Review
OpenAI Monitoring
## Summary

### Queue System Clarification
The discussion clarified the relationship between internal and external processing queues.
- **DSSQ vs. OpenAIQ distinction**: Confirmed these are separate queues, with DSSQ holding items pending OpenAI responses.
- **Queue dependency**: Items remain in DSSQ *only* because they’re awaiting OpenAI’s processing (OpenAIQ), not due to internal bottlenecks.

### Current Processing Status
All pending items have been dispatched to OpenAI, but responses are delayed.
- **Submission completeness**: All 3,000 items were successfully sent to OpenAI, yet no responses have been received.
- **External dependency**: The delay is attributable solely to OpenAI’s processing latency, not to internal batching or submission mechanisms.

### OpenAI Processing Dynamics
OpenAI’s throughput limitations directly impact workflow velocity (a rough wait-time estimate follows this summary).
- **Standard throughput rate**: OpenAI typically processes **~10 items every 5 minutes** under normal conditions.
- **Queue congestion risk**: Large backlogs (e.g., 5,100 items) can degrade OpenAI’s processing speed further due to system overload.

### Backlog Scale and Implications
A significant backlog exists, highlighting external constraints.
- **Current backlog volume**: **5,100 items** are pending in DSSQ, all awaiting OpenAI responses.
- **Root cause confirmation**: This bottleneck is definitively an **OpenAI-side limitation**, not an issue with internal build processing or queue management.

### Resolution Path and Communication
Next steps focus on transparency and stakeholder alignment.
- **Issue documentation**: The OpenAI bottleneck will be formally documented and shared with the team to clarify misconceptions.
- **Collaboration readiness**: Further discussions are planned within 30 minutes to address follow-up questions, potentially involving direct consultation.
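The quoted throughput makes the scale of the delay concrete. A quick arithmetic sketch, treating the ~10 items per 5 minutes figure as a steady rate (an assumption; real throughput degrades under congestion, as noted above):

```python
# Rough wait-time estimate from the throughput quoted in the meeting.
BACKLOG = 5_100
ITEMS_PER_MINUTE = 10 / 5  # ~2 items/minute under normal conditions

hours_to_drain = BACKLOG / ITEMS_PER_MINUTE / 60
print(f"~{hours_to_drain:.1f} hours to drain the backlog")  # ~42.5 hours
```

Even at the nominal rate, the 5,100-item backlog implies nearly two days of pure OpenAI processing time, which supports the "external bottleneck" conclusion.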
Error Handling Refinement
## Service Address Zip Code Handling
The meeting extensively discussed error 2018, triggered when service address zip codes are missing in CSV data. This field is critical for location mapping in UBM and for payment processing via PayClearly, but its absence on many invoices (e.g., remit info lacking a zip) complicates automated capture. Key points included:
- **Short-term solution**: Maintain relaxed validation to avoid blocking invoice processing, as operations currently skip manual zip code entry when it is missing.
- **PayClearly dependency**: Zip codes are essential only for credit card/guest portal payments (not ACH), but the utilities requiring this vary; emergency payments add complexity.
- **Long-term approach**: Differentiate requirements by customer type (bill-pay vs. AP/postpaid) and collaborate with Tim to identify utilities mandating zip codes for PayClearly.
- **Mapping impact**: Without zips, location matching relies on billing account numbers or manual searches, risking delays; adding state fields was proposed but deemed impractical without consistent invoice data.

## Due Date and Statement Date Resolution
Error 2019, caused by missing due dates or statement dates, was addressed by implementing automated fallbacks (see the sketch after this summary):
- **Due date handling**: Default missing due dates to the invoice date (implying immediate payment), avoiding errors for bills marked "upon receipt" or past due.
- **Statement date logic**: Use period-end dates if statement dates are absent, as these dates are functionally equivalent for processing.
- **Control code removal**: The legacy "control code" field (from FDG systems) was eliminated from validation; it’s unused downstream and retained only for internal reference.
- **Critical fields preserved**: Client account, vendor code, and total amount checks remain enforced to prevent data gaps.

## Summary Row Validation Adjustments
Error 2020 validations for summary-row fields were refined to balance automation and manual intervention:
- **Service address gaps**: If missing, pull from previous bills in the same chain when account/utility/location match; otherwise, flag for manual review.
- **Meter serial challenges**: Retain as a required field due to frequent DSS capture failures (e.g., Constellation invoices); utilities like water/sewer often lack meters, necessitating ops-team fixes.
- **Account code criticality**: Enforce strict validation, since mismatches disrupt customer-utility alignment.
- **Bill type defaults**: Set missing types to "Full Service" for non-core utilities (e.g., refuse/water), reducing mapping errors.

## Days of Service Calculation Logic
Inconsistencies in "days of service" values were resolved by deriving them from start/end dates:
- **Automated recalculation**: Compute days as *(end date - start date)* when values are missing or illogical, leveraging the reliably captured period dates.
- **Overlap handling**: The system flags date overlaps but self-corrects in subsequent bills once dates stabilize.

## Data Sourcing and System Improvements
Broader strategies emerged for enhancing data reliability and reducing manual work:
- **DSS limitations**: Utilities like Constellation pose extraction challenges (e.g., flipped account numbers, dense line items), requiring targeted parser improvements.
- **Service address sourcing**: When invoices omit addresses, use onboarding sheets or historical bills, but avoid retroactive cleanup due to scale.
- **Virtual account optimization**: Leverage billing account numbers for location matching when zips are absent, supplementing with fuzzy-match AI later.
- **Field prioritization**: Remove non-essential validations (e.g., control code) while preserving core fields like vendor code and total amount.

## Follow-Up Planning
The team committed to deeper dives into unresolved issues:
- **Error backlog**: Schedule dedicated sessions for the remaining errors (e.g., 2017, 2021+), repurposing existing meetings.
- **PayClearly alignment**: Confirm with Tim which utilities mandate zip codes and whether the requirements apply only to bill-pay customers.
- **UBM database review**: Audit field utilization to eliminate obsolete checks and align validations with downstream needs.
- **Victor account cleanup**: Address systemic data gaps (e.g., unmapped locations) separately due to exceptional complexity.
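A minimal sketch of the error-2019 fallbacks and the days-of-service recalculation described above; the function and field names are illustrative, not the DSS schema, and the "illogical value" test is a simplifying assumption.

```python
# Fallback rules for missing dates plus days-of-service recalculation.
from datetime import date

def resolve_due_date(due: date | None, invoice_date: date) -> date:
    # A missing due date implies "payable upon receipt".
    return due or invoice_date

def resolve_statement_date(statement: date | None, period_end: date) -> date:
    # Period-end is treated as functionally equivalent for processing.
    return statement or period_end

def days_of_service(start: date, end: date, reported: int | None) -> int:
    computed = (end - start).days
    # Recompute when the captured value is missing or illogical.
    return computed if reported is None or reported <= 0 else reported

print(days_of_service(date(2024, 1, 1), date(2024, 1, 31), None))  # 30
```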
Invoice Parsing Plan
## Summary
The discussion focused on ongoing work with invoice processing systems and upcoming team activities. Key topics included challenges in handling invoice subtotals and data breakdowns within the current OCR-based workflow, where the system duplicates charges by capturing both detailed line items and overview totals. A question was raised about bypassing Azure Document Intelligence extraction and sending raw invoice PDFs directly to O3, leveraging its vision capabilities for better contextual understanding, though compliance and cost constraints remain unclear and require further validation. Upcoming priorities include continuing work on BP credit implementation and addressing the subtotal discrepancies. Two critical meetings were highlighted: a session to review prioritized UBM errors (with outcomes documented for later review) and a requirements-gathering meeting with Afton about a new user activity report. Additionally, plans to integrate Gov into R&D efforts next week were confirmed, involving guidance on specific tickets, hands-on contributions, and onboarding support for new engineers.

## Wins
*None explicitly mentioned.*

## Issues
- **Invoice Data Processing**: Persistent challenges in distinguishing subtotals from detailed charge sections during OCR extraction, causing duplication of data (e.g., capturing both itemized charges and summary totals).
- **Workflow Efficiency**: Unclear rationale for not sending original invoice PDFs directly to O3 despite its vision capabilities, potentially missing opportunities to improve contextual accuracy.

## Commitments
- **Attendance at User Activity Session**: Team member to join Monday’s meeting with Afton to define requirements for the new user activity report.
- **R&D Collaboration**: Team to involve Gov starting next week for ticket guidance, direct contributions, and onboarding support for new engineers.
DSS Overview
## Overall DSS Process Flow
The meeting detailed the comprehensive document processing flow in DSS for invoices, starting with PDF uploads via an API and progressing through automated and manual validation stages. Key components include batch processing, duplicate checks, and integration with Azure services for data extraction and transformation.

### Initial Processing and Duplicate Detection
- **PDF ingestion and batch creation**: Invoices are uploaded to a Python-based API, triggering the creation of document batches tracked via Microsoft Service Bus queues.
- **Duplicate handling**: The system performs real-time duplication checks against existing records; flagged duplicates are routed to a manual resolution interface (DSS Connect) for operator review.
- **Status-driven workflow**: Each processing stage updates the invoice status (e.g., "pending," "processed"), enabling error recovery by restarting queues from the last known status.

### Data Extraction with Azure and OpenAI
- **OCR via Azure Document Intelligence**: Extracts text and metadata (e.g., page count) from invoices using pre-built models, with a 6-page limit per document for optimal processing.
- **LLM-driven categorization**: OpenAI’s GPT-3.5 processes the OCR output to classify line items into three observation types (an illustrative sketch follows this overview):
  - *Usage*: Metrics like kilowatt-hours without associated costs.
  - *Charge*: Fees (e.g., late penalties) without usage metrics.
  - *Usage Charge*: Combined usage and cost data (e.g., energy consumption billed per unit).
- **Batch optimization**: Requests are sent asynchronously to Azure OpenAI, with responses typically returned within 24 hours. Token limits (≈250K) accommodate most invoices without issues.

### Data Structuring and Mapping
- **JSON transformation**: LLM output is restructured into a flat JSON format, separating non-block items (e.g., client details) from block items (e.g., meter-specific charges).
- **Static CSV mappings**: Predefined CSV files standardize field names (e.g., renaming "utility account number" to "service location") and validate extracted data against expected formats.
- **Historical data refinement**: For recurring accounts, historical patterns (e.g., past observation types, commodity codes) refine LLM outputs, auto-populating missing fields like charge descriptions or codes.

### Validation and Post-Processing
- **Pre-audit checks**: DSS performs commodity-specific validations, such as ensuring natural gas invoices exclude invalid line items (e.g., "generation charges").
- **Currency and duplication safeguards**: Foreign currencies are rejected, and usage entries are scanned for double-counting before conversion to USD.
- **Template matching**: Extracted account/meter details are matched against SQL database records; matches trigger automatic data enrichment from existing templates (e.g., vendor IDs, service locations).

### Testing and Deployment Protocols
- **Staged validation**: Changes are tested locally first, then deployed to test environments for operator feedback before production rollout.
- **Integrity verification**: Final checks ensure line-item charges sum to the invoice total, flagging discrepancies for manual review.
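The three observation types reduce to two booleans: whether a line item carries usage and whether it carries a cost. The real classification is LLM-driven, so the sketch below only mirrors the definitions as a deterministic post-processing rule; the function name and labels are illustrative.

```python
# Deterministic mirror of the three observation-type definitions.
def observation_type(has_usage: bool, has_cost: bool) -> str | None:
    if has_usage and has_cost:
        return "usage_charge"  # e.g., energy consumption billed per unit
    if has_usage:
        return "usage"         # e.g., a kWh reading with no associated cost
    if has_cost:
        return "charge"        # e.g., a late-payment fee
    return None                # descriptive text, not a billable line item

assert observation_type(True, True) == "usage_charge"
assert observation_type(False, True) == "charge"
```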
Teams Review
## Document Location and Access Clarification
The core discussion revolved around locating and accessing a specific document that had previously been shared.

### Inquiry Regarding Shared Materials
The conversation began with a direct question about whether any documents had been sent outside of a previously established communication channel.
- **Confirmation of sharing method**: It was clarified that no additional documents were sent beyond those already shared in a specific Teams chat associated with a prior meeting, countering the initial assumption that materials might have been sent via email or another separate method.

### Resolution of Access Confusion
The confusion stemmed from where the document was located, leading to a screen-sharing attempt for verification.
- **Source of misunderstanding identified**: The issue was resolved when it became clear the document was shared exclusively within a Microsoft Teams meeting chat, not via email or a general Teams channel. The individual seeking the document had been searching in the wrong locations (email and general Teams areas), unaware the file resided in the chat history of that particular meeting.
- **Plan for follow-up**: After confirming the location (the Teams meeting chat), the individual acknowledged the oversight and said they would retrieve the document from there and respond later.
October Workstream Alignment
## Project Tracking Framework Development
A new tracking framework was created to manage project deliverables across teams and timelines. The system organizes work items by month with color-coded ownership indicators and includes fields for current challenges, proposed solutions, dependencies, and weekly progress updates. Key features include:
- **Monthly segmentation**: Work items grouped by target completion month (October, November, etc.) with visual dividers.
- **Progress tracking**: Weekly updates to be logged directly in the framework, with completed items marked in green.
- **Ownership visualization**: A color-coding system (yellow for Terra's involvement in Faizal's items, purple for Faizal's involvement in Terra's items, blue for Tim's group) clarifies cross-team responsibilities.
- **Adaptive scheduling**: Flexibility to move items between months based on feasibility discussions during weekly check-ins.

## October Priority Alignment
The team reviewed and refined October deliverables to ensure realistic commitments, focusing on high-impact items requiring cross-functional coordination. Critical adjustments include:
- **Customer issue resolution process**: Faizal requested inclusion in phase 1 planning to ensure alignment with future phases, avoiding the need for redesign later.
- **Report invoice processing**: Mockup requirements must be finalized by Monday to meet the October deadline; delays could push delivery to early November.
- **Parking lot approach**: Long-term initiatives like the HubSpot-Jira integration were acknowledged as Q1 priorities rather than October deliverables, to maintain focus.

## Integrity Check Error Resolution Strategy
A major reprioritization occurred for integrity check errors, now recognized as the quarter's most critical initiative. The approach was restructured into three distinct workstreams with clear ownership:
1. **Issue identification**: Led by Faizal across all teams, targeting October completion.
2. **Solution documentation**: Cross-team effort to formalize fixes (a mix of manual processes and knowledge sharing).
3. **Issue resolution**: Ownership assigned to specific teams (UBM Ops, UBM Dev, DS Ops, DS Dev, Pay) based on problem type after documentation.
- **Realistic timeline**: Only identification is expected in October; documentation and resolution will extend into November/December given the two-week timeframe.

## Timeline Management and Accountability
A proactive process was defined to handle schedule slippage and dependencies:
- **Weekly realism checks**: Each meeting will assess whether deliverables remain feasible based on prerequisite completion (e.g., requirement gathering).
- **Transparent rescheduling**: Items delayed by dependency issues will be formally moved to later months with documented reasons (e.g., "moved to November 1st week due to delayed requirements").
- **Intervention protocols**: Two paths exist for blocked items: facilitator-led working sessions for complex collaboration needs, or owner-driven asynchronous resolution for simpler tasks.

## Cross-Team Dependency Coordination
Specific mechanisms were proposed to prevent misalignment between interdependent teams:
- **Phase work visibility**: Faizal will observe Mike's HubSpot development (phases 1-2) via async updates or brief syncs to ensure compatibility with future Jira integration needs.
- **Requirement validation**: Owners (Terra/Tim) must formally confirm that requested deliverables and timelines are feasible before work begins.
- **CSM inclusion**: Acknowledged CSMs (reporting to Terra) as a sixth stakeholder group for certain integrity check items, ensuring comprehensive coverage.
Daily Progress Meeting
## Performance Challenges and Target Goals
The meeting focused on addressing suboptimal daily invoice processing volumes, with the target of 1,500 invoices per day not being met. This shortfall directly impacts customer experience, as delayed invoice retrieval leads to billing issues and potential service disconnections. Key factors hindering progress include inconsistent portal accessibility and unclear workload allocation.
- **Critical impact of delays**: Unprocessed invoices prevent timely bill payments, causing customer disruptions and disconnections.
- **Daily target benchmark**: The team aims to process 1,500 invoices daily to meet operational and customer obligations.
- **Current performance gap**: Existing output falls significantly below the 1,500 target, necessitating immediate process improvements.

## Technical Access Issues and Portal Reliability
Persistent technical barriers prevent consistent access to supplier portals for invoice downloads. While routing traffic through Virginia has resolved some geographic IP issues, new errors unrelated to location have emerged. A systematic approach is underway to diagnose whether failures originate internally or from external portal issues.
- **Infrastructure improvements**: Rerouting traffic to Virginia IPs resolved initial access problems for many previously inaccessible portals.
- **Ongoing diagnostics**: New errors are being triaged to determine whether they stem from internal systems or external portal failures.
- **Credential volatility**: Frequent customer credential changes contribute to access instability, complicating consistent retrieval efforts.
- **Actionable tracking**: A dedicated list of non-functional portals is being compiled to isolate unresolved technical bottlenecks.

## Metric Calculation and Performance Visibility
Discrepancies exist in how productivity metrics are calculated between teams. One method counts only fully processed invoices, while another includes "snoozed" accounts (where agents attempted access but found no bill) and manually reported closed accounts. This inconsistency obscures true agent effort and output (a counting sketch follows this summary).
- **Divergent counting methodologies**:
  - *Exclusive method*: Counts only accounts where agents successfully accessed and downloaded invoices.
  - *Inclusive method*: Also counts snoozed accounts (valid effort where bills were unavailable) and manually flagged closed accounts.
- **Impact on agent morale**: Agents consistently hitting 100+ processed invoices feel penalized when unreported snoozes/closures mask their effort.
- **Solution in progress**: Access to a shared activity log report will provide unified visibility into snoozes, downloads, and exceptions, creating a single "source of truth."

## Workload Allocation and Prioritization
Inefficient distribution of invoice processing tasks contributes to the target shortfall. Agents receive varying volumes per vendor (e.g., 200 for one vendor, 70 for another), with no clear system to ensure 1,500 actionable items are available daily. High-priority customers like Duke Energy and problematic ones like Victra require strategic assignment.
- **Unbalanced workload distribution**: Agents receive inconsistent daily volumes across vendors, preventing predictable output.
- **High-impact customer focus**: Duke Energy offers high daily volume potential and should be prioritized if accessible, while Victra remains problematic.
- **Historical invoice opportunity**: Processing historical invoices (e.g., two years of non-MFA accounts) can significantly boost daily counts, as demonstrated by an agent achieving 1,200 in one day.
- **Coordination gap**: Lack of visibility into actionable item counts per customer/vendor hinders optimal task assignment between teams.

## Action Plan and Next Steps
Concrete steps were defined to address bottlenecks. Weekly syncs will track progress, while technical and operational workstreams run in parallel to resolve access issues and align workload management.
- **Technical resolution**: The IT team will provide a definitive list of non-functional portals for joint troubleshooting of credential, MFA, or system issues.
- **Operational alignment**: Teams will collaborate to standardize:
  - Daily distribution of 300+ actionable items per agent.
  - Prioritization of high-volume/high-success-rate vendors (e.g., Duke).
  - Inclusion of historical invoice batches where feasible.
- **Transparent reporting**: All teams will use the shared activity log to track snoozes, downloads, and manual closures, ensuring consistent performance measurement. Discrepancies in the report will be flagged for immediate investigation.
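A minimal sketch of the two counting methods, assuming the shared activity log exposes one status per account touched; the status labels are assumptions based on the discussion, not the report's actual schema.

```python
# Exclusive vs. inclusive productivity counts from an activity log.
from collections import Counter

def daily_metrics(events: list[dict]) -> dict[str, int]:
    c = Counter(e["status"] for e in events)
    exclusive = c["downloaded"]
    inclusive = c["downloaded"] + c["snoozed"] + c["closed_account"]
    return {"exclusive": exclusive, "inclusive": inclusive}

log = [{"status": "downloaded"}] * 90 + [{"status": "snoozed"}] * 25
print(daily_metrics(log))  # {'exclusive': 90, 'inclusive': 115}
```

The gap between the two numbers is exactly the "masked effort" agents complain about, which is why a single agreed-upon method matters.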
AP file formats - Need to Huddle up Internally re Banyan Clients
## Customer
The customer operates as a payment processor for real estate clients, integrating payment data with property management platforms like Yardi, RealPage, AppFolio, and Entrada. Their core function involves receiving JSON payment files, transforming the data via an internal integration tool, and delivering client-specific formats compatible with each client's ERP system. They lack dedicated IT resources at the client level for significant file manipulation, relying instead on standardized or minimally customized outputs.

## Success
A key achievement centers on enabling seamless integration with major real estate management platforms without requiring deep client-side IT involvement. The product delivers payment data in a JSON format that the customer can adapt and push into diverse systems like Yardi or AppFolio. This capability has been a significant differentiator, allowing the customer to onboard clients efficiently by avoiding complex, resource-intensive IT integrations during initial setup. The flexibility to handle data transformation internally has proven valuable for their operational model.

## Challenge
The primary challenge revolves around managing expectations and processes for payment file customization. While the customer requires specific file formats (column names, structures, capitalization) to ensure error-free ingestion into client ERP systems, the current approach to handling deviations from standard templates is unsustainable:
- **Unclear customization boundaries**: Minor changes (e.g., renaming "Location" to "Site Name," altering capitalization) are currently treated as custom development work, causing delays and resource strain. The lack of a clear definition of "minor" versus "custom" changes leads to friction.
- **Resource constraints and delays**: The customer's clients typically lack IT resources to perform even simple file modifications themselves. Relying on the product team for all adjustments creates bottlenecks, unpredictable timelines, and frustration, especially compared to competitors offering faster turnaround.
- **Sales vs. implementation misalignment**: Past sales discussions created an expectation that the product could develop "anything needed" for any platform at no extra cost or delay. The reality of requiring development effort for non-standard files clashes with this initial positioning, causing credibility issues.
- **Scalability of "standards"**: Defining a set of 10 standard files (e.g., one per major platform like Yardi NG, AppFolio vX) is complicated by platform version differences and the sheer volume of potential minor variations requested, threatening to make the "standard" list unmanageable.

## Goals
The customer aims to achieve several key objectives to streamline operations and meet client demands effectively:
- Establish a clear, finite set of standardized payment file templates covering the most common real estate platforms and versions (e.g., Yardi Voyager, AppFolio Standard, the RealPage-specific format).
- Define a transparent, efficient process for handling customization requests outside the standard set, including:
  - Clear criteria distinguishing minor tweaks (potentially self-service) from custom development.
  - Realistic timelines and potential costs associated with custom work.
- Develop self-service capabilities (e.g., a UI wizard) allowing the customer or their clients to make simple adjustments (e.g., column header renaming, field ordering) to standard templates without developer intervention; a minimal sketch of such a tweak follows this list.
- Improve alignment between sales promises and implementation realities by giving sales teams definitive collateral outlining the standard offerings and the process/cost implications of customization upfront.
- Reduce onboarding and file delivery timelines for clients using standard templates to remain competitive, particularly against providers like Conservice.
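To show how small the "minor tweak" class really is, here is a sketch of a template-driven header rename, the kind of change a self-service wizard could own; the mapping values and function name are illustrative only, not the product's actual configuration format.

```python
# Apply per-client header overrides to a standard CSV payment file.
import csv
import io

RENAMES = {"Location": "Site Name"}  # client-specific overrides (example)

def apply_template(csv_text: str, renames: dict[str, str]) -> str:
    rows = list(csv.reader(io.StringIO(csv_text)))
    # Only the header row changes; data rows pass through untouched.
    rows[0] = [renames.get(col, col) for col in rows[0]]
    out = io.StringIO()
    csv.writer(out).writerows(rows)
    return out.getvalue()

print(apply_template("Location,Amount\nHQ,120.50\n", RENAMES))
```

Capitalization changes and field reordering fit the same declarative pattern, which is what makes a clean "minor vs. custom" boundary plausible.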
Simon Account list
## Summary

### LOA Process and Documentation
A standardized Letter of Authorization (LOA) process is essential for streamlining utility account setups, particularly for entities with complex ownership structures. The goal is to create a single LOA referencing all legal entities per utility jurisdiction, avoiding repetitive paperwork. Key considerations include:
- **Addressing letterhead challenges**: Proposing a supplemental document clarifying ownership structures to simplify authorization without requiring entity-specific letterheads.
- **Defining responsibility boundaries**: Creating a process document to delineate when internal teams hand off tasks versus when external intervention is needed (e.g., after exhausting options like W9s or tax ID verification).

### Account List Verification and Onboarding
Validating the master account list (48,000+ accounts) is the critical first step before any technical onboarding can proceed. This involves:
- **Reconciling discrepancies**: Identifying ~1,600 accounts with no invoices since May 2023 to confirm active status or supplier changes (a dormancy-check sketch follows this summary).
- **Leveraging existing data**: Using NG’s exported records (including platform, region, and square footage attributes) as the primary source, with corrections applied during onboarding.
- **Handling summary billing accounts**: These require unique mapping but won’t delay core onboarding if inactive.

### Portal Access and EDI Setup Strategy
Overcoming login bottlenecks for EDI accounts requires phased execution due to the dependency on offboarding from the current provider (NG):
- **Prioritizing non-EDI accounts**: Starting with PDF bill processing for non-EDI vendors immediately after account list validation.
- **Deferring EDI logins**: Accessing NG portal credentials during the December/January offboarding, using downloaded bill images until then.
- **Manual fallback**: If logins remain unavailable by January, initiating credential resets as a contingency.

### Onboarding Workflow Sequencing
Three parallel workstreams must align to enable bill payments by Q1 2024, with account onboarding being the longest lead item:
1. **Account validation**: Confirming location/account lists (e.g., correcting legacy errors like misspelled city names).
2. **Portal setup**: Creating vendor portals for EDI accounts after initial bill capture.
3. **Payment file integration**: Currently on hold due to external factors but critical for operational readiness.

### Cross-Team Coordination
Engaging the Energy Billing/AP/AR team is vital for resolving granular account issues and maintaining momentum:
- **Validating dormant accounts**: Their expertise is needed to verify inactive accounts and supplier changes.
- **Technical call integration**: Scheduling dedicated sessions next week to review account anomalies (e.g., subsidiaries without active data).
- **Shared drive optimization**: Establishing dedicated folders for Navigator uploads to streamline document exchange.

### Next Steps and Dependencies
Immediate priorities focus on unlocking onboarding bottlenecks through collaborative reviews:
- **Account list sign-off**: Finalizing location/account validation by early next week to trigger PDF processing for non-EDI accounts.
- **Energy Billing team involvement**: Ensuring their participation in technical calls to address region mismatches, square footage errors, and entity-specific quirks.
- **LOA template refinement**: Circulating an updated LOA draft incorporating subsidiary references for Texas utilities.
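A minimal sketch of the dormancy check used to flag the ~1,600 questionable accounts, assuming a record shape with an account number and a last-invoice date; both are illustrative, not the NG export schema.

```python
# Flag accounts with no invoice activity on or after the cutoff date.
from datetime import date

CUTOFF = date(2023, 5, 1)  # "no invoices since May 2023"

def dormant(accounts: list[dict]):
    """Yield account numbers needing active-status/supplier verification."""
    for acct in accounts:
        last = acct.get("last_invoice_date")
        if last is None or last < CUTOFF:
            yield acct["account_number"]

sample = [{"account_number": "A-1", "last_invoice_date": date(2023, 2, 3)}]
print(list(dormant(sample)))  # ['A-1']
```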
UBM Error
## Constellation Invoice Processing System
Discussions focused on improving the Constellation invoice retrieval workflow. Current challenges include manual customer ID queries and a lack of automation for 50,000+ customers. A solution was proposed:
- **Automate invoice pulls** by integrating Constellation IDs into HubSpot, enabling system-driven queries without developer intervention.
- **Leverage existing infrastructure** from carbon accounting embedded systems, where CSMs already provide contract IDs, minimizing new code requirements.
- **Upcoming alignment meeting** to define timelines, assign responsibilities, and validate the new build process.

---

## Error Management and Prioritization Framework
The team is restructuring how error resolution is tracked and communicated:
- **Three-phase approach**:
  - *Identification*: Document all errors (e.g., integrity checks, data verification issues).
  - *Short-term mitigation*: Address critical errors immediately to reduce backlog and system failures.
  - *Long-term solutions*: Redesign foundational systems (e.g., virtual account rules) for sustainable fixes, though these require significant time.
- **Resource constraints** limit long-term work until 2025, with immediate focus on high-impact errors.
- **Transparency with leadership** is essential to set realistic expectations about resolution timelines.

---

## Reporting System Enhancements
New report requests were evaluated against development capacity:
- **Immediate priorities**:
  - *Bill due amount/due date reports*: Simple field additions enabling customer/location filtering (e.g., the Victra use case).
  - *User activity tracking*: Foundational work is needed before generating new reports, as activity data isn’t currently logged.
- **Pushback on low-priority requests**: Reports like "list columns needed" or niche bill validations were deferred to prevent distraction from core error-resolution work.
- **Standardization required**: Requesters must provide exact specifications (e.g., sample Excel outputs) to avoid developer guesswork.

---

## Team Structure and Workflow Clarification
Operational roles were mapped to prevent ownership ambiguity:
- **Five distinct groups** collaborate on onboarding and issue resolution: Data Services Ops/Dev, UBM Ops/Dev, and CSMs.
- **CSMs quarterback client onboarding**, coordinating with dev/product/ingestion teams, while Data Services handles backend processing.
- **JIRA-HubSpot integration**: Transitioning from email to HubSpot for ticket routing, with fields like "customer ID" made mandatory to avoid blind assignments.

---

## Virtual Account Rule Review
Potential changes to virtual account handling were discussed:
- **Error-driven relaxation**: Adjust rules tied to specific error codes (e.g., SQL-based conditions) to prevent build-stopping failures.
- **Risk assessment**: Foundational ID system changes (e.g., legacy one-to-many relationships) are high-risk and deferred.
- **Customer-specific relevance**: Errors must be evaluated per use case (e.g., bill-pay vs. analytics) during documentation.

---

## Workload and Expectation Management
Key strategies to balance capacity:
- **New engineer onboarding**: Assign straightforward tasks (e.g., report generation) to free senior developers for complex error resolution.
- **Report request freeze**: Cap ad-hoc report demands post-October to focus 90% of dev effort on integrity checks.
- **Timeline adjustments**: Several initiatives (e.g., user activity tracking) were pushed to November due to current bandwidth constraints.
DSS Check-in
## Summary

### Data Services Freezing Issues
Investigations revealed that persistent system freezing stems from legacy code issues rather than memory constraints, with potential short-term fixes being explored.
- **Root cause analysis**: The freezing occurs due to inefficient code loops causing excessive token refreshes (2 million per hour), not resource limitations, since the virtual machines operate below 70-80% memory capacity.
- **Infrastructure adjustments**: Virtual machines currently support two operators each, but freezing persists even during single-operator sessions, confirming code-level defects.
- **Collaborative troubleshooting**: Engineers are examining specific code segments to identify loop triggers, though a long-term code overhaul remains necessary.

### Account Number Mismatch in DSS
Operators frequently assign invoices to incorrect accounts during web downloads, exacerbated by interface flaws and insufficient validation.
- **Web downloader instability**: Task lists randomly reorder during refreshes, causing operators to misassign invoices to the wrong master accounts.
- **Systematic validation gap**: DSS currently accepts operator-input account numbers as authoritative, creating erroneous sub-accounts under master accounts instead of flagging mismatches with the extracted invoice data.
- **Proposed workflow change**: Implement a validation step to route mismatched invoices back to account setup for manual correction before processing.

### Data Acquisition Backlog
A critical bottleneck exists, with 3,270 invoices stuck in data acquisition, exceeding system processing thresholds.
- **Capacity limitations**: The current infrastructure caps daily processing at 2,800 invoices, creating unsustainable backlogs during peak volumes.
- **Prioritization challenges**: Invoices queue chronologically without manual reprioritization options, delaying urgent tasks despite operational pressures.
- **Short-term mitigation**: Engineers are assessing temporary scaling solutions while acknowledging that long-term architectural upgrades are essential.

### UBM Error Patterns
Manual errors during high-volume invoice processing highlight the need for systemic safeguards.
- **Error frequency**: 2-3 recurring incidents weekly involve incorrect account assignments, primarily from operator fatigue during bulk processing.
- **Multi-layer solutions**: Proposals include operator retraining, code enhancements to prevent sub-account creation for mismatched invoices, and UI fixes for task-list stabilization.
- **Upcoming review**: A dedicated session is scheduled to analyze the top UBM errors, focusing on replicable prevention strategies.

### DSS Functional Enhancements
Key improvements target data extraction accuracy, particularly for complex billing elements.
- **Total charge reconciliation**: Developing checks to validate that line-item sums match total charges, addressing discrepancies from unprocessed late fees or credits.
- **Commodity safeguards**: Adding validations for propane/natural gas service types to prevent misclassification by cross-referencing extracted data against approved service items.
- **Noise reduction**: Refining AI prompts to ignore non-charge descriptive text (e.g., "delivery challenges include") that currently creates phantom line items.

### Analytics Reporting Gaps
Critical vendor/supplier data disappeared from the late bills report, impairing operational visibility.
- **UI regression**: The vendor field vanished from the analytics interface without explanation, hindering team members from identifying delinquency patterns.
- **Troubleshooting approach**: Engineers requested specific reproduction steps and screenshots to diagnose whether the issue stems from frontend display errors or backend data pipeline faults.

### Credential Management
Operational delays occur due to credentialing bottlenecks for utility portals.
- **Access barriers**: Manual credential setup for vendor portals (e.g., F40, SHS) consumes significant time, diverting resources from core tasks.
- **Delegation strategy**: Exploring outsourced support for credential provisioning to accelerate onboarding and reduce internal workload.
Total Charges Review
## DDI Infrastructure Improvements and Issue Resolution
Recent infrastructure changes aim to address recurring DDI issues, though code-related problems may persist. Matthew is leading short-term solution development and may request additional details about specific failures.

## Total Charges and Line Item Processing Logic
Key billing logic principles were clarified for accurate charge capture (see the sketch after this summary):
- **Late fee inclusion**: Late fees must be factored into total charge calculations.
- **Subtotal handling**: Delivery charge subtotals (e.g., $174.40) should be excluded when the individual line items (customer charge, usage-based charge) are already captured, to avoid duplication.
- **Manual updates**: Operators primarily modify line items via DDAs rather than the DSS UI, though DSS may be used for error corrections.
- **Charge validation**: Total current charges must align with the sum of individual line items (e.g., $32,183 + $344).

## Credit Balance Capture Challenges
Persistent inaccuracies exist in capturing credit balances:
- **System limitations**: Only ~20% of bills currently reflect credit balances correctly (e.g., a -$383.31 credit appearing as $0).
- **Root cause**: The mishandling stems from LLM processing flaws rather than OCR errors, requiring dedicated investigation.

## Energy Observation Type Corrections
Categorization fixes are needed for natural gas and propane billing:
- **Natural gas**: Incorrect "generation" labels must be replaced with "wellhead" as the valid observation type.
- **Propane**: Unsupported observation types (e.g., "use," "build," "use cost") require new safeguards to prevent miscategorization.
- **Service type standardization**: "natural_gas_here" was confirmed as the correct service type format.

## Follow-Up Focus Areas
Three priority workstreams were defined:
1. **Total charge composition**: Formalizing rules for charge aggregation and subtotal exclusion.
2. **Credit handling**: Investigating systemic fixes for negative balance capture.
3. **Propane safeguards**: Implementing validation for propane observation types.

Natural gas observation updates will proceed concurrently, with propane adjustments taking precedence due to their higher error frequency.
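A minimal sketch of the charge-validation rule described above, using the figures from the meeting: sum the line items (late fees included), skip subtotal rows to avoid double counting, then compare against the stated total. The row shape and tolerance are assumptions, not the DSS implementation.

```python
# Validate that captured line items reconcile with the stated total.
def reconcile(line_items: list[dict], stated_total: float,
              tolerance: float = 0.01) -> bool:
    captured = sum(item["amount"] for item in line_items
                   if not item.get("is_subtotal", False))
    return abs(captured - stated_total) <= tolerance

items = [
    {"amount": 32_183.00, "description": "current charges"},
    {"amount": 344.00, "description": "late fee"},          # included
    {"amount": 174.40, "is_subtotal": True},  # delivery subtotal, skipped
]
print(reconcile(items, 32_527.00))  # True: 32,183 + 344 matches the total
```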
DSS Check-in
# Done

## Summary

### Performance Issues with DDS Application
Significant slowdowns and instability are occurring in the DDS application, severely disrupting workflow. Users experience:
- **Extended processing times**: Simple tasks like building a "meter 7 line item" take up to 20 minutes.
- **Application failures**: The app frequently fails to launch, requiring multiple restart attempts.
- **Recent escalation**: Problems have persisted for weeks but intensified dramatically over the past two days, preventing timely invoice processing and risking service disconnections for customers.

### Infrastructure Analysis
Current VM resources are underutilized, ruling out infrastructure capacity as the primary issue:
- **Resource metrics**: Memory usage consistently remains below 30% across all VMs, with ample free memory available.
- **VM distribution**: Five active VMs handle 6-8 users, well below the configured limit of four users per VM.
- **Scaling readiness**: Three additional VMs are on standby to activate during demand spikes, but remain unused.

### Root Cause Investigation
The performance degradation is likely tied to the application's codebase rather than infrastructure:
- **Long-standing technical debt**: The application has been in use for over 20 years, with recent integrations (PSS/IPS) potentially exacerbating inefficiencies.
- **Lack of profiling**: No code-level diagnostics have been performed to identify bottlenecks, making infrastructure adjustments a temporary workaround.
- **Symptom speculation**: High resource utilization thresholds (e.g., 80%) might trigger failures due to unoptimized code, but this remains unverified.

### Proposed Short-Term Mitigation
To alleviate immediate user impact, a VM reconfiguration will be tested:
- **User density reduction**: Decreasing users per VM from four to two to isolate performance issues.
- **Infrastructure scaling**: Spinning up five additional VMs (ten active in total) to accommodate the same user load.
- **Implementation constraints**: Changes require PowerShell scripting and disabling auto-scaling policies, preventing instant deployment. The update will occur during off-peak hours.

### Cost Implications
Adjusting the infrastructure will increase operational expenses:
- **Current baseline**: VM costs average ~$1,000/month, fluctuating due to nightly shutdowns.
- **Projected increase**: The new configuration may raise costs to ~$1,500/month, deemed acceptable for short-term relief given the operational urgency.

### Additional System Concerns
Secondary issues were briefly noted but not deeply explored:
- **FDG Connect performance**: Users report slowness, though specifics were unclear during the discussion.
- **Access permissions**: New engineers lack Azure access, requiring follow-up.
DSS Alignment
## Summary

### Onboarding and Documentation
Efforts are underway to onboard new engineers onto the DSS team, focusing on accelerating their productivity through structured training and documentation.
- **Training sessions**: Recurring meetings are planned to provide hands-on guidance, allowing new team members to gather questions and clarify system intricacies over the next 1-2 weeks.
- **Documentation gaps**: Existing materials require expansion to cover evolving system changes, with new engineers encouraged to draft preliminary documentation for validation by experienced developers.
- **Knowledge transfer**: Emphasis on capturing institutional knowledge to prevent single points of failure, especially as legacy processes evolve and new team members integrate.

### Root Cause Analysis of Invoice Errors
Persistent invoice processing errors necessitate deeper investigation to distinguish DSS-related issues from upstream data or operational flaws.
- **Misattributed errors**: Many flagged issues (e.g., incorrect account mappings or negative charge handling) originate outside DSS, such as during data entry or upload phases.
- **Critical bug identification**: A confirmed DSS flaw involves mishandling invoice credits (negative charges), causing payment total discrepancies that require prioritized fixes.
- **Visibility solutions**: Proposals include building diagnostic reports to trace invoice journeys, categorizing errors by ownership (e.g., DSS vs. operations), and accelerating root cause resolution.

### Interpreter Layer for DSS-UBM Integration
A translation service between DSS and UBM is proposed to resolve compatibility issues stemming from UBM’s rigid virtual account structure (a normalization sketch follows this summary).
- **Data massaging**: The layer would preprocess DSS output to align with UBM’s requirements (e.g., standardizing formats for virtual account IDs, spaces, and dashes), reducing integrity check failures.
- **Architectural approach**: Leveraging existing frameworks (like Raikarov’s multi-stage validation) to introduce a third validation step, cross-referencing UBM’s database for real-time matching.
- **Urgency**: This addresses recurring mismatches causing invoice backlogs, with design discussions prioritized to avoid compounding delays.

### Integrity Check Backlog Management
Thousands of invoices are stalled in integrity checks due to virtual account ID mismatches and legacy data inconsistencies, demanding both immediate and long-term strategies.
- **Short-term triage**: Manual bulk updates and error-code tagging are underway for high-frequency issues (e.g., location or utility-type mismatches), targeting a 30% backlog reduction.
- **Systematic resolution**: Developing automated workflows to reprocess or flag invoices during validation, minimizing manual intervention for recurring error patterns.
- **Coordination challenges**: Reprocessing requires alignment with legacy systems to prevent duplication, while new UBM integrations must accommodate forward-looking fixes.

### Strategic Resource and Roadmap Alignment
Resource allocation and future planning focus on de-risking external dependencies and embedding DSS into broader organizational goals.
- **Internal capability building**: Shifting from reliance on Cognizant to repurposing internal IT talent, aiming for U.S.-based support and faster response times.
- **Five-year integration**: Advocating for DSS’s inclusion in Constellation’s long-term infrastructure strategy to attract cross-departmental adoption and resource sharing.
- **Risk mitigation**: Diversifying team expertise to eliminate bottlenecks, with documentation and training ensuring sustainability amid staffing fluctuations.
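The "data massaging" described for the proposed interpreter layer above reduces to string normalization in the simplest case. Below is a minimal sketch assuming the mismatches really are about spacing, dashes, and casing in virtual account IDs; the actual rules would come out of the design discussions mentioned.

```python
import re

def normalize_virtual_account_id(raw: str) -> str:
    """Canonicalize a DSS-side virtual account ID for UBM's rigid format.

    Assumed rules, based on the space/dash mismatches discussed: trim
    whitespace, uppercase, and collapse runs of spaces or dashes into a
    single dash.
    """
    cleaned = raw.strip().upper()
    return re.sub(r"[\s\-]+", "-", cleaned)

# e.g. "  ab 12--34 " and "AB-12-34" now compare equal after normalization.
assert normalize_virtual_account_id("  ab 12--34 ") == "AB-12-34"
```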
DSS Code Setup
## Summary ### Access Setup and Permissions Coordination The team focused on granting necessary system access to new engineers while maintaining security protocols: - **Secure credential sharing**: One-time passwords will be distributed individually via email to avoid exposure in group threads. - **Infrastructure access**: Permissions for IPS and DSS systems were prioritized, with ADO (Azure DevOps) access pending license assignments. - **Resource provisioning**: Engineers will receive incremental access to components like code repositories immediately, with additional permissions granted upon demonstrated need. ### Technical Knowledge Transfer A structured onboarding plan was established to familiarize new engineers with critical systems: - **DSS overview session**: Scheduled for 9 AM tomorrow, focusing on system architecture and workflows. This session will be recorded for future reference. - **Documentation strategy**: Existing technical documentation will be supplemented with session recordings, creating a self-service knowledge base. ### End-of-Year Support Planning Operational continuity was ensured through year-end availability commitments: - **Support coverage**: Key team members remain available through December 31st to address access issues or technical queries. - **Escalation paths**: Direct communication channels were confirmed for urgent infrastructure-related requests requiring immediate attention. ### Collaboration Workflow Confirmation The team aligned on standard operating procedures for cross-functional tasks: - **Email protocol**: All access-related communications will use encrypted channels and BCC fields when sharing sensitive credentials. - **Version control coordination**: New engineers will receive read-only repository access initially, with write permissions contingent on code review performance.
Q4 Alignment Meeting
## Project Setup and Knowledge Transfer Efforts are underway to resolve setup issues with the Beeline codebase through collaborative troubleshooting. Matthew provided an initial overview, but Shake encountered unresolved configuration problems during setup. A dedicated session between Shake and Matthew is scheduled to achieve full environment readiness, acknowledging that Matthew may need time to recall setup specifics due to infrequent deployments. ## SOC Compliance Process Challenges The current SOC compliance workflow requires significant manual effort, consuming approximately 20 hours per person quarterly for screenshot documentation and synchronous Excel reviews. This unsustainable process diverts critical resources from development priorities. Exploration of automation solutions with vendors is planned to reduce this overhead, particularly for recurring audit requirements. ## Data Integrity and Error Management Addressing DSS-related integrity check errors involves a multi-phase approach: - **Error root cause analysis**: Identifying whether errors originate from DSS, UBM Ops, or other systems, with documentation of error clusters (e.g., batch errors, UBM field list issues) as an ongoing effort. - **Interpreter layer development**: Creating translation logic between DSS and UBM outputs, initially implemented within existing workflows rather than as a standalone solution. Shake leads this with Ruben's support, targeting initial functionality by quarter-end. - **Ownership assignment**: Establishing clear responsibility matrices for error resolution, distinguishing between DSS-fixable issues and those requiring UBM team intervention. ## Resource Allocation and Knowledge Gaps Critical dependencies on key personnel create operational risks: - **Ruben's central role**: As the primary expert for UBM-DSS integration, his upcoming two-week absence highlights an urgent need for backup resources. Plans include shadowing arrangements and accelerated knowledge transfer to mitigate single-point-of-failure risks. - **Gaurav's extended involvement**: His role is confirmed through year-end to manage new engineers, lead onboarding, and handle complex integration issues. This bridges capacity gaps until new hires are fully operational, with expectations set for selective use of his senior expertise to avoid over-reliance. - **Documentation improvements**: Shake will conduct recorded sessions for new developers covering DSS architecture, local setup, and testing procedures to accelerate ramp-up, addressing existing documentation gaps. ## UBM Strategic Priorities and Reporting Q4 priorities focus on stabilizing core functionalities while planning structural changes: - **Immediate invoice integrity**: Ensuring no missed payments through temporary workarounds, with awareness that these may require later remediation. - **AP file impact mitigation**: Protecting postpaid customer payment systems by reverting high-risk rule changes, requiring collaboration with UBM Ops to identify critical dependencies. - **Reporting enhancements**: Developing Power BI reports for error monitoring, adopting an iterative approach where initial versions provide baseline functionality (e.g., sorting/filtering tweaks) with phased refinements. The *System Status Report* is flagged as a high-priority deliverable. - **Customer view separation**: Initiating design for distinct UBM interfaces for prepaid vs. postpaid customers in Q4, recognizing that shared views cause recurring issues. 
This involves exploring separate data pipelines or access controls. ## Onboarding Process Constraints Current onboarding demands excessive developer involvement for technical tasks like field mapping, diverting resources from strategic work. While contractual obligations prevent immediate pausing, a process overhaul is planned for Q1 to shift responsibilities to CSMs through: - Simplified mapping workflows - Clear role definitions between CSMs and developers - Documentation enabling junior developers to handle routine onboarding tasks
Error Validation Priorities
## Summary ### Priority Issues and Solutions The team is prioritizing resolution of issues recently highlighted by Afton, focusing on natural gas-related discrepancies and bill validation errors. - **Natural gas data inconsistencies**: Actively investigating mismatches between expected and actual data, particularly concerning generation metrics and wellhead configurations. - **Bill validation enhancements**: Implementing fixes for critical errors affecting invoice processing, excluding one complex issue requiring a long-term solution. - **Inactive meter challenge**: Acknowledged a systemic limitation where DSS assigns bills to inactive meters due to lack of two-way communication; no immediate solution planned. Instead, leveraging existing FDG Connect functionality to manually mark entire accounts as inactive. ### Batch Processing Workflow Optimization Discussed inefficiencies in the batch processing system where a single error halts an entire batch of invoices. - **Current limitation**: If one invoice in a batch (e.g., 100 invoices) fails during "ready to send" status, the entire batch stalls, delaying all other valid invoices. - **Proposed improvement**: System should isolate and fail only problematic invoices while allowing others to proceed. A feasibility assessment is pending to determine if this is a quick configuration tweak or a major redesign. ### Error Validation Documentation Initiated a cross-functional effort to document and automate error resolution processes. - **Goal**: Systematically map manual fixes performed by operations teams to enable automated solutions, reducing manual intervention. - **Action plan**: - Prioritize "green"-tagged errors (e.g., error code 20191) where due dates are missing. Proposed auto-fix: Populate due date using the invoice date when possible. - Review Victra-specific exceptions (e.g., error 2021), which currently lack auto-fixes due to past customizations. Consensus reached to enable auto-fixing for Victra. - Schedule a dedicated session with operations (Tara, Tim) and engineering to document resolution logic for high-priority errors, using Whimsical for collaboration. ### User Activity Reporting Requirements gathering for a new user activity report is underway. - **Approach**: Finalizing short-term and long-term reporting needs with Afton, focusing on essential metrics first. - **Review process**: Operations team will validate requirements before development begins to ensure all critical data points are included. ### Technical Adjustments and Follow-ups Several tactical items were confirmed for immediate action. - **Auto-fix deployment**: Enabling error 2021 resolution for Victra bills by adjusting system configurations. - **Bulk updates**: Planning bulk updates to address data audit issues once team members return. - **Meeting coordination**: Rescheduling error-documentation workshops to accommodate participant availability, with examples required for efficient discussion.
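Two proposals above are concrete enough to sketch: letting a batch fail only its problematic invoices, and the suggested auto-fix of populating a missing due date from the invoice date. Field names, the 30-day offset, and the error wording below are illustrative assumptions, not confirmed business rules.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Invoice:
    invoice_id: str
    invoice_date: date | None = None
    due_date: date | None = None
    errors: list[str] = field(default_factory=list)

def auto_fix_missing_due_date(inv: Invoice) -> None:
    # Proposed auto-fix: derive the due date from the invoice date when
    # possible. The 30-day offset is an assumption for illustration.
    if inv.due_date is None and inv.invoice_date is not None:
        inv.due_date = inv.invoice_date + timedelta(days=30)

def process_batch(invoices: list[Invoice]) -> tuple[list[Invoice], list[Invoice]]:
    """Isolate failures so one bad invoice no longer stalls the whole batch."""
    ready, held = [], []
    for inv in invoices:
        auto_fix_missing_due_date(inv)
        if inv.due_date is None:
            inv.errors.append("missing due date")  # the 2019-family error
            held.append(inv)                       # hold only this invoice
        else:
            ready.append(inv)                      # the rest proceed to send
    return ready, held
```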
Follow-up with Navigator & Microsoft Team
## Summary ### PG Pool Implementation for Postgres The meeting focused on adopting PG Pool, a specialized load balancing and connection pooling solution designed for Postgres, to replace existing infrastructure. - **Replacement of current load balancers**: Standard solutions like F5 are inadequate due to unique TLS protocol requirements, necessitating PG Pool for efficient PostgreSQL connection management. - **Architectural adjustments**: PG Pool will be hosted either on a separate VM or within the Kubernetes cluster alongside the application, eliminating the need for the existing app gateway in the Postgres component. - **Azure compatibility confirmed**: No anticipated certification or compliance issues within Azure, as the solution will be containerized and managed internally. ### Deployment Strategy and Testing Deployment plans for PG Pool in the Constellation tenant were outlined, emphasizing immediate next steps. - **Initial setup**: Requires implementation in the Constellation Azure tenant, with an estimated effort of two days pending permission and access hurdles. - **Validation process**: Deployment will occur within the Constellation environment, followed by integration testing to ensure functionality. - **Pipeline readiness**: Existing microservice pipelines will be leveraged once core infrastructure is operational, enabling developer involvement for subsequent deployments. ### Timeline and Resource Constraints Current priorities and scheduling challenges were addressed, impacting the PG Pool rollout. - **Immediate delays**: Key personnel are allocated to SOC audit processes this week, limiting bandwidth for migration tasks. - **Access verification**: Testing access to the container registry and Kubernetes cluster is pending, with results expected imminently to unblock progress. - **Steering committee update**: A snapshot of deployment status will be prepared for an upcoming steering discussion to align stakeholders. ### Security and Compliance Alignment Concerns regarding organizational approval for PG Pool were discussed, with a focus on risk mitigation. - **Formal acceptance needed**: Constellation’s security team must review and approve PG Pool since it’s a third-party tool outside standard supported services. - **Upcoming security review**: A dedicated meeting with security stakeholders is scheduled for Thursday to address compliance, documentation, and support implications. - **Risk assessment**: The tool falls under "application category" governance, similar to Node.js in AKS, requiring explicit validation to proceed. ### Architectural Dependencies Clarifications were provided on supplementary infrastructure needs beyond PG Pool. - **App gateway retention**: The existing application gateway for Kubernetes access remains necessary and will be configured post-initial deployment. - **Simplified setup**: Configuring the gateway is low-effort (estimated at a few hours) and will be prioritized after the first workload deployment. - **No technical blockers**: Outside of priority conflicts, no significant technical hurdles are anticipated, though the learning curve for new tools remains steep. ### Next Steps and Collaboration Coordination plans were solidified to advance the initiative. - **Developer engagement**: Teams will commence work once core pipelines and PG Pool are functional, focusing on microservice deployment. - **Cross-team alignment**: Ongoing collaboration with infrastructure and security teams will ensure architectural consistency and compliance. 
- **Thursday’s security meeting**: Critical for formalizing PG Pool approval and addressing authentication concerns like PostgreSQL password management.
DSS Check-in
## Summary of Key Meeting Topics ### Data Services System (DSS) Output Issues Persistent errors in DSS are causing downstream billing failures, particularly for bill-pay customers. Key problems include: - **Incorrect observation types**: DSS captures invalid metrics (e.g., "generation" for natural gas instead of "wellhead"), leading to blocked output. This occurs because DSS ignores commodity-specific rules defined in DDAS (e.g., propane should only allow "use" and "use cost"). - **Inactive meter processing**: DSS pulls data from deactivated meters/sub-accounts, forcing manual intervention to clear bills. - **Missing critical dates**: Bills lacking invoice/due dates or service periods get stuck in output. Proposed solution: route these to Data Audit for manual entry. - **Same-day service periods**: DSS fails to reference prior bill data for delivery-based commodities (e.g., propane), creating illogical service intervals. ### Natural Gas and Propane Data Conflicts Specific commodities face recurring DSS misconfigurations: - **Natural gas**: "Generation" observation types (electric-only) are incorrectly applied. A mass update to replace "generation" with "wellhead" is proposed. - **Propane**: Unassigned service items (e.g., "use build") appear despite DDAS restrictions. DSS must align with predefined service-item lists. *Solution path*: Reference DDAS service-setup tables to enforce commodity-specific rules in DSS (sketched after this summary). ### Inactive Account Management Processing closed accounts risks erroneous payments: - **Exclusion protocol**: Use FTV Connect to block entire inactive accounts (e.g., former clients like JVM). - **System limitation**: DSS lacks visibility into DDAS inactivation status. No two-way sync exists to auto-ignore inactive meters/accounts. ### Data Validation and Workflow Adjustments Urgent fixes to streamline billing operations: - **Date handling**: Missing invoice/due dates should default to end-service dates or trigger Data Audit holds. - **Mass updates**: Historical data corrections (e.g., observation types) require bulk edits, previously handled via backend scripts. - **Documentation gap**: Commodity-specific observation types aren’t centrally documented; screenshots of DDAS setups will be provided for reference. ### Next Steps for Core Fixes Priorities for engineering resolution: - **Observation type enforcement**: Restrict DSS to DDAS-defined service items per commodity. - **Inactive meter filtering**: Develop logic to exclude inactive meters/sub-accounts during processing. - **Date validation**: Automatically flag bills with missing dates for audit. *Immediate action*: Address natural gas/propane issues first via mass updates and configuration tweaks.
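A minimal sketch of that solution path, assuming the allowed observation types per commodity can be read out of the DDAS service-setup tables; the hardcoded sets below only mirror the examples from the discussion (propane limited to use/use cost, natural gas using wellhead rather than the electric-only generation).

```python
# Hedged sketch: allowed observation types per commodity. In practice these
# sets would be loaded from the DDAS service-setup tables, not hardcoded.
ALLOWED_OBSERVATIONS = {
    "propane": {"use", "use cost"},
    "natural gas": {"wellhead", "use", "use cost"},  # "generation" is electric-only
    "electric": {"generation", "use", "use cost"},
}

def observation_allowed(commodity: str, observation_type: str) -> bool:
    """Return True if DSS may emit this observation type for the commodity."""
    return observation_type in ALLOWED_OBSERVATIONS.get(commodity, set())

# The misconfiguration described above would now be rejected at the source:
assert not observation_allowed("natural gas", "generation")
assert observation_allowed("natural gas", "wellhead")
```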
Reviewing Development work
## Quarterly Priority: Invoice Backlog Resolution Clearing the invoice backlog is the top priority this quarter, driven by systemic errors requiring cross-team collaboration. The backlog stems from issues across multiple systems (DSS, UBM, and OPS), necessitating root-cause analysis and task delegation. - **Error diagnosis and assignment**: Each invoice error must be categorized by origin (DSS, UBM, or OPS) and assigned to the relevant team for resolution. - **Cross-functional coordination**: Resolution requires synchronized efforts between Data Services, UBM Development, and Operations teams to prevent recurrence. ## Error Analysis Strategy A granular approach to error categorization will identify patterns and ownership, enabling targeted fixes. - **Multi-system audit**: Errors may originate from data discrepancies in DSS, processing gaps in UBM, or operational oversights in OPS. - **Preventive measures**: Solutions will address both immediate fixes and systemic safeguards to minimize future backlogs. ## October Deliverables and Workload Assessment Concerns exist about capacity for concurrent October initiatives, prompting discussions on feasibility and phasing. - **Progress tracking**: Existing projects are in varying stages (e.g., operator visibility tools near deployment). - **Phased implementation**: - *Short-term*: Accept interim solutions like data dumps for critical needs, enabling manual analysis via pivot tables. - *Long-term*: Develop automated reports by December to replace manual processes. - **Resource alignment**: License availability and team bandwidth (e.g., Mike Winters’ workload) are being verified to avoid bottlenecks. ## Progress Visibility and Reporting A centralized system will highlight completed work to ensure team contributions are recognized. - **Accomplishment tracking**: Monthly slides will catalog completed initiatives, countering perceptions of low output. - **Visual roadmap**: Dependencies between teams (e.g., OPS-Development collaborations) will be mapped to clarify workflows. ## Workshop Integration Upcoming sessions will refine requirements but won’t overload timelines. - **Metrics validation**: A follow-up workshop will finalize report metrics, with scope adjustments if complexity demands phased delivery. - **Non-intrusive scheduling**: Workshop dates (e.g., Tara’s two-day session) will be noted without inflating timelines. ## Timeline Management Approach Flexible quarterly planning accommodates spillover while maintaining momentum. - **Milestone adjustments**: October deliverables may extend into November if needed, with December earmarked for polished outputs. - **Incremental reviews**: Biweekly check-ins will assess progress, reprioritize tasks, and shift deadlines transparently. - **Documentation ownership**: A dedicated lead consolidates all inputs into dynamic slides for team alignment.
Output Service Overview
## Summary ### Team Changes and Focus The team has undergone recent changes, with new leadership now overseeing data services (DDS) and plans to expand responsibilities to UBM in the future. The primary focus remains on resolving ongoing technical issues, particularly around the Output Service. ### Output Service Codebase and Structure The Output Service code resides within the **DDAS repository** under the `Services` folder, alongside legacy services like Directory Watcher and Lock Management. Key details include: - **Current Usage**: Only the `Pair AI module` is actively used; Energy Cap and Dude Solutions modules appear inactive. - **Service Architecture**: Runs as a Windows service on the `Apple One` server, pulling client-specific configurations from a database table (`system_variables`). - **Code Organization**: The solution requires modernization, including potential separation into individual repositories for easier maintenance and deployment. ### Configuration and Environment Management Critical updates are needed to align the Output Service with environment-specific configurations (dev/test/prod): - **Current Shortcomings**: The service lacks built-in environment support, relying solely on the `system_variables` table in the database. - **Migration Plan**: - Adopt the `appsettings.json` pattern (similar to the Container App project) to manage configurations independently of the database. - Add environment variables (e.g., `DDAS_ENVIRONMENT`) to dynamically load settings. - **Urgency**: Without this update, new builds will fail due to unresolved configuration dependencies, blocking deployments. ### Deployment Process Deployment is currently manual and requires optimization: - **Manual Steps**: - Build in `Release` configuration. - Copy files to `Apple One` server, run `uninstaller.bat` and `installer.bat` to stop/update/restart the service. - Service runs under the `WCF Services` user account. - **Automation Gap**: No CI/CD pipeline exists yet; automation will be prioritized post-configuration updates. ### Local Development Setup Enabling local debugging requires configuration adjustments: - **Prerequisites**: - Replicate `appsettings.{environment}.json` files from the Container App project into the Output Service. - Initialize configuration manager in `Program.cs` to load environment-specific settings. - **Challenges**: The codebase hasn’t been tested locally recently, and environment setup may require troubleshooting connection strings and dependencies. ### Technical Upgrades and Refactoring Long-term improvements were discussed to enhance maintainability: - **Configuration Cleanup**: Audit the `system_variables` table to identify required settings (e.g., `ClientOutputLocation`, logging paths) and prune unused entries. - **Repository Structure**: Decouple the Output Service and Directory Watcher into standalone repositories. - **Dependency Management**: Update NuGet packages and align with modern frameworks (e.g., `FDG Core`). ### Immediate Next Steps - **Local Validation**: A team member will test the local setup and configuration changes, escalating issues promptly. - **Collaboration**: Knowledge transfer sessions will ensure the team can debug and modify the service without single points of failure.
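The Output Service itself is .NET (hence `Program.cs` and `appsettings.json`), but the layered-configuration pattern being adopted is worth spelling out. The sketch below illustrates the intended lookup order in language-neutral Python, with file names and fallback behavior assumed from the migration plan: base file first, then an overlay chosen by `DDAS_ENVIRONMENT`.

```python
import json
import os
from pathlib import Path

def load_settings(config_dir: str = ".") -> dict:
    """Illustrate the appsettings layering described in the migration plan.

    appsettings.json supplies defaults; appsettings.{env}.json, selected via
    the DDAS_ENVIRONMENT variable, overrides them, so the service no longer
    depends solely on the system_variables database table.
    """
    env = os.environ.get("DDAS_ENVIRONMENT", "dev")
    settings: dict = {}
    for name in ("appsettings.json", f"appsettings.{env}.json"):
        path = Path(config_dir) / name
        if path.exists():
            settings.update(json.loads(path.read_text()))
    return settings

# e.g. DDAS_ENVIRONMENT=prod layers appsettings.prod.json over the defaults.
```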
AP File Standardization & Banyan
## Customer Operates in the energy management sector, focusing on invoice processing and payment systems. Their workflow involves handling utility bills across multiple locations, requiring precise GL coding and attribute management for accurate payment file generation. They rely on automated systems to process invoices efficiently and ensure timely vendor payments. ## Success The platform successfully automates invoice processing and integrates with payment systems like PayClearly, enabling streamlined funding transfers. When locations and accounts have complete attributes, invoices move seamlessly to payment files without manual intervention. This efficiency was demonstrated in cases where properly configured accounts processed payments correctly, reducing administrative overhead. ## Challenge Persistent issues arise when locations or virtual accounts lack required custom attributes (e.g., GL codes, company codes). This causes payment file errors, halting AP file generation while allowing PayClearly funding to proceed, creating reconciliation mismatches. Bills marked for payment remain stuck in limbo, with their status dates shifting daily until attributes are added. This problem surfaced prominently with Victra, where missing GL codes delayed AP files despite invoices being processed. ## Goals - Achieve 100% attribute completeness for all locations and accounts to prevent payment file errors. - Ensure all new customer onboardings (e.g., Rainier, Stone Creek) have attributes pre-loaded before go-live. - Eliminate delays between PayClearly funding and AP file generation for reconciliation accuracy. - Clarify platform capabilities to avoid misalignment with sold features, particularly regarding tariff-based rate class analysis. - Implement bulk attribute updates via exports/imports to resolve gaps efficiently (see the sketch following this summary).
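A hedged sketch of that bulk export/import idea: scan an exported attribute sheet for missing GL or company codes before bills reach payment-file generation, so gaps are corrected up front rather than stalling AP files. The column names are hypothetical.

```python
import csv

REQUIRED_ATTRIBUTES = ("gl_code", "company_code")  # hypothetical column names

def find_attribute_gaps(export_path: str) -> list[dict]:
    """List rows in a location/account export that would break AP files."""
    gaps = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            missing = [a for a in REQUIRED_ATTRIBUTES if not (row.get(a) or "").strip()]
            if missing:
                gaps.append({"location": row.get("location_id"), "missing": missing})
    return gaps

# The resulting list can be bulk-corrected and re-imported before go-live.
```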
DSS Check-in
## Summary The meeting focused on accelerating documentation efforts and addressing critical technical issues. Key priorities include ensuring all code is accessible to prevent single points of failure, with specific emphasis on documenting the output service and Apple service. Documentation for the interpreter layer was also highlighted as essential for onboarding new engineers. Technical discussions centered on resolving data validation errors, particularly the 2016 error (total charges not matching subcharges) and 2019 error (missing due date). The location mapping fields (account number, meter type, commodity, vendor code, client account, meter serial) were reviewed, with confirmation that meter serial population issues are resolved. While adding validation checks for these fields was considered, it was deferred for now. A streamlined communication process was established for resolving UBM-related queries, moving away from relayed requests to direct engagement with relevant engineers. Time sensitivity was stressed, with instructions to pivot to other tasks if responses are delayed. ## Wins - **Meter Serial Resolution**: Fixed the meter serial population issue, ensuring it now populates correctly for every bill in DSS. - **Field Validation Clarity**: Confirmed all six required fields for location mapping are now properly populated from user inputs via RTC Connect. ## Issues - **Code Accessibility Risk**: The output service code exists only on a single engineer's machine, creating a critical single point of failure. - **Documentation Gaps**: Insufficient documentation for the output service, Apple service, and interpreter layer, hindering knowledge sharing and onboarding. - **UBM Validation Logic**: Unclear integrity check logic in UBM for the 2016 error (charges/subcharges mismatch), requiring clarification. - **Error Backlog**: Pending investigation into error 2019 (missing due date) and historical error 2027 (unsupported unit measure). - **Inconsistent Fix Deployment**: Location mapping field fixes are implemented in DSS but not yet propagated to the output service. ## Commitments - **Output Service Documentation**: Assigned to the output service owner for completion this week, covering code and operational details. - **Error 2019 Investigation**: Assigned to the engineer to diagnose and resolve the missing due date issue, with follow-up after analysis. - **UBM Communication Channel**: Manager to establish direct Slack/voice channels between engineers and UBM experts (e.g., Alex) for real-time queries. - **Output Service Fix Deployment**: Team to implement location mapping field fixes in the output service urgently (target: same day).
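The 2016 check above is at least easy to state precisely, even though UBM's exact integrity-check logic still needs clarification; a sketch with a small rounding tolerance follows, the tolerance itself being an assumption.

```python
from decimal import Decimal

def charges_match(total: Decimal, subcharges: list[Decimal],
                  tolerance: Decimal = Decimal("0.01")) -> bool:
    """True when subcharges sum to the bill total within a rounding tolerance.

    The 0.01 tolerance is an assumption; UBM's actual rule behind the 2016
    error was flagged in the meeting as still unclear.
    """
    return abs(total - sum(subcharges)) <= tolerance

assert charges_match(Decimal("100.00"), [Decimal("60.00"), Decimal("40.00")])
assert not charges_match(Decimal("100.00"), [Decimal("60.00"), Decimal("39.50")])
```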
Onboarding Pause Alignment
## Summary ### Onboarding Challenges and Prioritization Current onboarding volume poses significant scalability risks due to unresolved operational issues. Seven active onboardings are confirmed for this quarter, including high-impact customers like NetXL and Simon, who represent substantial revenue opportunities but also major operational burdens. These large-scale implementations have been in progress for nearly a year, yet persistent system limitations threaten their successful deployment. - **Scalability constraints**: Manual processes and technical debt prevent efficient handling of new implementations, consuming disproportionate operational resources. - **Proposed onboarding freeze**: Temporarily halting new customer implementations would allow focus on stabilizing existing commitments, though this conflicts with revenue targets. - **Customer experience risks**: Large strategic clients expect seamless transitions, but current system instability jeopardizes satisfaction and retention. ### Strategic Direction and Quarterly Goals Clarity on product vision remains elusive, hindering effective quarterly planning. Immediate priorities center on resolving critical payment processing failures while acknowledging broader customer needs. - **Bill-pay focus**: 20% of customers (representing \~40% of invoice volume) drive 90% of operational demands due to payment processing issues. - **Downstream impact**: Delays in AP file generation disrupt clients' accounting workflows, though most use alternative payment methods. - **Vision ambiguity**: Lack of strategic alignment on whether to prioritize bill-pay customers exclusively or maintain broader service offerings creates prioritization challenges. ### Operational Process Documentation Systematic documentation of operational exceptions is underway to identify automation opportunities. A Whimsical board catalogs recurring errors and proposed solutions. - **Error pattern analysis**: Common issues like missing service addresses (error 2020) are being mapped with potential fixes like location-based fallbacks (a fallback sketch follows this summary). - **Collaboration gap**: Cross-functional validation with operations teams is essential to confirm solution viability but difficult to schedule. - **Workshop requirement**: Dedicated sessions with operations personnel are needed to review real-world examples and edge cases for comprehensive solution design. ### Cross-Functional Alignment Needs Resolving systemic issues requires intensive collaboration between product, operations, and data services teams. - **Workshop urgency**: A minimum one-hour session is proposed to analyze operational pain points, requiring participants to bring concrete examples of billing errors. - **Scheduling constraints**: Operational workloads make extended meetings challenging, though fragmented discussions risk context loss. - **External facilitation**: New program management resources are being deployed to identify inter-team dependencies and success requirements. ### Product Scalability Concerns Fundamental limitations in the current architecture prevent sustainable growth, necessitating strategic decisions about service scope. - **Resource allocation conflict**: Supporting all customer use cases (bill pay, reporting, analytics) strains limited engineering capacity. - **Contractual implications**: Existing agreements obligate service delivery beyond core bill-pay functionality, creating potential compliance risks if deprioritized. 
- **Technical debt impact**: Short-term fixes for payment processing issues have introduced reporting inaccuracies requiring remediation.
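The location-based fallback flagged above for error 2020 could work roughly as sketched here; the lookup source and field names are assumptions pending validation with the operations team.

```python
def resolve_service_address(bill: dict, location_addresses: dict[str, str]) -> str | None:
    """Fallback for error 2020: when a bill lacks a service address, fall
    back to the address on file for its mapped location.

    `location_addresses` is assumed to map location IDs to known street
    addresses; returning None means the bill still surfaces as error 2020.
    """
    address = (bill.get("service_address") or "").strip()
    if address:
        return address
    return location_addresses.get(bill.get("location_id", ""))
```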
[EXTERNAL] Updated invitation: Constellation/Arcadia @ Fri Oct 10, 2025 1pm - 1:30pm (EDT) (faisal.alahmadi@constellation.com)
## Prospect The prospect is an energy company specializing in managing utility accounts for clients, requiring solutions to aggregate, normalize, and automate utility data processing. Their operations involve handling electricity, natural gas, water, and waste utilities across North America. Key challenges include managing credential setups for customer accounts, overcoming multi-factor authentication (MFA) hurdles, and ensuring consistent data delivery from diverse utility providers. They prioritize outsourcing manual credential management to streamline onboarding and reduce internal workload, particularly for new customers transitioning from other platforms. ## Company The company focuses on utility data aggregation and credential management services, targeting enterprises with large-scale utility account portfolios. They offer: - **Utility Coverage**: Support for electricity, natural gas, water, and limited telecom/steam utilities, with ongoing expansion based on demand. - **Credential Management**: Outsourced setup of web portal credentials for utility accounts, including handling MFA workarounds (e.g., tokens, temporary passes). - **Data Delivery**: Automated extraction of PDF bills and EDI invoices, with SLA-backed uptime and data quality guarantees. - **Operational Scale**: Capability to onboard 2,500-3,000 accounts monthly without dedicated project plans; larger volumes require customized timelines. Their process involves clients providing legal documentation (LOAs), tax IDs, recent bills, and access permissions. The company then manages utility outreach via email/phone, with up to three attempts per credential setup. Failed credentials are flagged for client review. ## Priorities Critical priorities identified include: - **Credential Management Efficiency**: Outsourcing setup for new customer accounts to accelerate onboarding, with a target turnaround of two weeks for ~1,000 credentials. Requires clear inputs (LOAs, tax IDs, bills) and real-time progress tracking. - **Transparent Reporting**: Weekly client-facing reports detailing credential status (e.g., "800 connected, 200 pending"), attempt histories, and failure reasons. Exportable data with account-level granularity is essential for client updates. - **MFA Mitigation**: Proactive solutions for utilities implementing MFA, including credential resets or alternative authentication paths. Success rates depend on accurate initial data inputs. - **Utility Coverage Assurance**: Confirmation that all supported utilities (electricity, gas, water) are eligible for credential management, with flexibility to address waste utility inconsistencies. - **SLA Alignment**: Credits for undelivered data and adherence to SLAs for uptime/data quality. Documentation for non-delivery credits will be shared separately. - **Scalability Planning**: Project-based coordination for bulk onboarding (e.g., 10,000+ accounts), requiring advance notice to allocate resources.
UBM Error
## Summary ### FDG Connect File Exchange Fix A critical issue was resolved where PDF files within ZIP archives from Intel File Exchange were being incorrectly renamed. The fix ensures original filenames are preserved during processing, maintaining data integrity for downstream systems. This deployment addressed a specific client-reported bug where file identification was failing due to unexpected naming conventions. ### Service Address Processing Improvements Modifications were implemented to handle inconsistent service address formats in bill processing. The LLM logic was updated to: - Extract address components reliably even when labels like "Service Address" are absent. - Handle variations in address completeness (e.g., missing zip codes) by prioritizing available data fields like facility name, city, and state. - Document the revised prompt logic to clarify expected behavior for future reference and troubleshooting. ### UBM Location Mapping (Error 3018) Investigation Significant discussion centered on resolving the persistent "3018" error related to location mapping failures when sending data from DSS to UBM. Key points include: - The error stems from UBM's inability to create a virtual account using six specific fields sent by DSS (client account, vendor code, commodity, bill type, account code, meter serial). - Clarification is needed from UBM (specifically Alex) on the exact data expectations and validation rules for each field to ensure successful mapping. - While not necessarily a DSS data issue, potential solutions involve adding pre-validation checks within DSS before sending data to UBM or adjusting data formatting to meet UBM's requirements. A follow-up request for detailed documentation from UBM is pending. ### Data Validation and Reporting Tasks Several data validation and reporting updates were prioritized: 1. **Error 2016 (Total Charges Mismatch):** Immediate investigation was requested into recurring "2016" errors indicating discrepancies between total charges and the sum of line items on bills. This is urgent as it impacts data trustworthiness and requires analysis to determine if issues stem from recent processing or legacy data. Findings are needed before a 12:30 meeting. 2. **Download Helper Report Enhancement:** A modification was assigned to add an "Active Tasks" filter to the existing Web Download Helper report. This will align the report's view with FDG Connect's interface by excluding completed or expired tasks, improving operational visibility for users like Afton. 3. **UBM Database Report Preparation:** Initial work commenced on defining the necessary database columns and structure for an upcoming UBM-related report, laying the groundwork for its development. ### Team Coordination and Ticket Management Procedural adjustments were discussed to improve workflow efficiency: - Ensuring Bianca joins future calls for better context on ongoing work. - Emphasizing the need to create detailed Jira tickets for each distinct root cause of errors (like the multiple causes potentially behind error 2020) to facilitate clear ownership and tracking. - Documenting fixes and logic (e.g., the service address prompt changes) directly within tickets or knowledge bases for future reference.
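The pre-validation idea above can be sketched directly: before DSS sends a bill to UBM, confirm the six mapping fields are populated so a 3018 failure is caught on the DSS side. Since the per-field format rules are exactly what is awaiting documentation from UBM, this checks presence only.

```python
# The six fields UBM uses to create a virtual account, per the discussion.
MAPPING_FIELDS = ("client_account", "vendor_code", "commodity",
                  "bill_type", "account_code", "meter_serial")

def premap_check(record: dict) -> list[str]:
    """Return the mapping fields that are empty or missing.

    Presence is all that can be verified until UBM documents each field's
    validation rules; an empty result means the record can at least attempt
    location mapping without an immediate 3018.
    """
    return [f for f in MAPPING_FIELDS if not str(record.get(f) or "").strip()]

print(premap_check({"client_account": "C-100", "vendor_code": "V-7"}))
# -> ['commodity', 'bill_type', 'account_code', 'meter_serial']
```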
Error Strategy Alignment
## Resource Allocation and Onboarding Strategy Addressing challenges with integrating Matthew into the team, including documentation gaps and code management issues. Key actions include: - **Structuring Matthew's October plan**: Mandating all processes documented, code migrated from local machines to Git repositories by October 31st, with daily check-ins to track progress. - **Leveraging Gaurav's expertise**: Utilizing his knowledge of DSS and IPS systems for parallel task execution, estimating ~5 hours/week support to accelerate error resolution and reduce dependency bottlenecks. - **Mitigating knowledge gaps**: Acknowledging limited internal understanding of legacy systems (e.g., UBM, DDS), requiring sessions between Gaurav and Shake for efficient knowledge transfer. ## Error Resolution Bottlenecks Systemic issues in data flow between DSS and UBM causing recurring errors, with proposed solutions: - **Root cause analysis**: Errors stem from incorrect/incomplete assumptions in existing builds, requiring technical and product-level investigations. - **Resource constraints**: Current team capacity (Faisal/Shake) is insufficient for rapid fixes; adding Gaurav aims to accelerate diagnosis and resolution of high-volume errors. - **Ownership clarity**: Differentiating between DSS-owned errors (e.g., data mapping) vs. UBM/Ops-owned issues to avoid misalignment in accountability. ## Reporting Consolidation and Strategy Overhauling reporting mechanisms to reduce redundancy and improve visibility: - **Standardizing reports**: Creating unified dashboards merging DSS and UBM data to eliminate manual report generation and outdated data issues. - **Prioritization**: Limiting reports to critical KPIs (e.g., error backlog trends) instead of ad-hoc requests; sunsetting unused reports via a centralized Confluence repository. - **New reporting needs**: Developing a version-controlled progress report to distinguish *new* errors from *legacy backlog*, accurately reflecting resolution impact. ## Project Management Framework Aligning Carrie’s PM initiative with practical execution: - **Scope definition**: Focusing on 3-5 core October priorities (e.g., integrity error reduction, report consolidation) instead of broad goals like "fix UBM." - **Transparent tracking**: Tagging tasks to specific owners/teams with clear deadlines (e.g., "Document error-code resolutions by October 12th"), avoiding ambiguous responsibilities. - **Blockers escalation**: Formalizing processes to flag items requiring cross-team input (e.g., mock-bill logic gaps) in Jira, preventing circular dependencies. ## Technical Debt and Process Improvements Addressing systemic inefficiencies impacting long-term scalability: - **Error-code standardization**: Collaborating with Ops/Jay to define resolution protocols per error code (e.g., bulk-fix eligibility), reducing research overhead. - **Automation gaps**: Automating recurring data pulls (e.g., mock-bill balance discrepancies) instead of manual weekly queries. - **Upstream/downstream alignment**: Proactively assessing new integrations (e.g., Arcadia replacing OpenAI) to prevent replicating existing data-flow issues.
Review Mapping Location Logic
Constellation bills
Backlog status and updates
## Summary ### Current Operational Challenges The team is facing significant backlog issues requiring immediate attention. Key challenges include: - **Critical backlog situation:** Operations are currently overwhelmed, with teams struggling to meet expectations and established metrics. - **Resource constraints:** Hiring additional personnel (e.g., 20 more people) is not viewed as a viable or sustainable solution to the current workload. - **Risk of regression:** A single day of slippage could significantly worsen the backlog, making consistent daily effort crucial to prevent further setbacks. ### Team Management & Accountability Ensuring effective team oversight is paramount to resolving the backlog and preventing recurrence. The focus is on: - **Proactive management:** Leaders must actively track team performance to ensure efficiency and effectiveness in meeting goals. - **Ownership of roles:** Individuals are expected to clearly articulate their responsibilities and their team's contributions towards overcoming the current challenges. - **Preventing future issues:** A core objective is implementing management strategies that prevent the current backlog situation from happening again. ### Strategic Approach & Outlook The strategy involves a concerted, short-term push leveraging all available resources with a defined target timeline: - **All-hands-on-deck mobilization:** Every resource must be fully engaged and focused on clearing the backlog through the immediate crisis period. - **Creative problem-solving required:** The team must explore unconventional methods to overcome the current workload surge on the platform. - **Targeted resolution timeline:** Confidence is expressed that operations can return to a normal range by the end of October, implying the next 30+ days are critical. - **Holistic review planned:** Once the immediate backlog is cleared, a broader review encompassing systems, processes, and personnel alignment will be conducted to ensure long-term stability. ### Leadership Perspective & Urgency Leadership emphasizes the gravity of the situation while acknowledging team efforts: - **Appreciation for effort:** Recognition is given for the team's hard work, including late hours, dedicated to restoring service levels. - **Heightened urgency:** The current period is described as being "in the middle of the storm," requiring maximum focus and effort from everyone. - **Confidence in overcoming:** Despite the challenges, there is a strong belief that the team will successfully navigate this period and look back on it as a significant achievement by November.
DSS Priorities Sync
## Summary The discussion centered on addressing critical operational challenges, particularly concerning error resolution scalability and process gaps. Key themes included prioritizing bill-pay issues (especially life bills) while managing emerging R&D items like mock bills, a manual process where bills are estimated proactively to avoid utility shut-offs. This approach creates reconciliation complexities and accounts for ~80% of service address errors. The conversation highlighted systemic risks, including: - **Location mapping mismatches**: Significant discrepancies between addresses in bills and UBM location IDs (e.g., 1,000 invoices for PPG with only 137 matches), requiring fuzzy-matching scripts and manual intervention (a sketch follows this summary). - **DSS logic gaps**: Missing operational workflows historically handled manually by Ops, now causing bulk errors during automated processing. - **Resource constraints**: Upcoming medical leave for a key data team member risks institutional knowledge loss, emphasizing the urgency to document processes and automate queries. Scalability concerns were underscored, noting that manual workarounds (e.g., mock bills) are unsustainable for new customers. The inflexibility of UBM’s platform necessitates preprocessing logic before import to avoid overwhelming Cognizant’s dev team with data repairs. ## Wins - Backlog reduction below 5,000 errors through coordinated efforts. - Identification of mock bills as a primary error source, enabling targeted investigation. - Development of fuzzy-matching scripts to partially automate location mapping. ## Issues 1. **Mock bill proliferation**: - Manually created bills cause ~500-600 service address errors monthly. - Lack of clarity on volume, creation logic, and reconciliation procedures. - Risks customer shut-offs if not addressed systematically. 2. **Location mapping failures**: - Severe address mismatches between UBM location IDs and bill data (e.g., PPG’s 1,000 invoices mismatched to 300 locations). - No predefined logic to handle variations (e.g., "102 Highway" vs. "102 HWI"). 3. **DSS/UBM integration gaps**: - Missing operational logic (e.g., service address handling) not captured in DSS. - UBM’s inflexibility forces post-import fixes instead of preprocessing. 4. **Resource vulnerabilities**: - Key data team member’s imminent medical leave threatens continuity. - Undocumented output service code blocking meter serial data, risking unresolved errors. ## Commitments - **Mock bill analysis**: Obtain a bulk report on mock bills and operational plan from Ops before Ruben’s leave. *Owner: Data team* - **Output service access**: Draft a precise emergency request for critical code/files to address blocking issues. *Owner: Engineering lead* - **Process documentation**: Ensure comprehensive documentation of all manual fixes and logic upon the developer’s return. *Owner: Engineering team* - **Pre-import logic development**: Implement DSS-side conditional rules for location/vendor mapping to reduce UBM import failures. *Owner: Engineering team*
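The standard library gets surprisingly far on this kind of matching; below is a minimal fuzzy-matching sketch using difflib, where the 0.8 similarity cutoff is an assumed starting point to tune against real mismatches like "102 Highway" vs. "102 HWI".

```python
import difflib

def match_location(bill_address: str, ubm_addresses: dict[str, str],
                   cutoff: float = 0.8) -> str | None:
    """Best-effort match from a bill's address to a UBM location ID.

    `ubm_addresses` maps location_id -> address on file. The cutoff is an
    assumption: too low invites false matches, too high leaves more rows
    for manual intervention.
    """
    best = difflib.get_close_matches(
        bill_address.lower(),
        [a.lower() for a in ubm_addresses.values()],
        n=1, cutoff=cutoff,
    )
    if not best:
        return None  # route to manual review
    for loc_id, addr in ubm_addresses.items():
        if addr.lower() == best[0]:
            return loc_id
    return None
```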
Plan for Arcadia
## Current Operational Challenges and Data Management Issues Significant difficulties exist in managing utility bill data and credentials internally, causing recurring problems with missing data and process failures. The team struggles with incomplete bill retrieval, where expected monthly bills don't materialize, requiring manual intervention to track down discrepancies. Credential management is particularly problematic, as the current team lacks specialized experience, leading to enrollment errors and delayed bill processing. These issues are exacerbated by reliance on outdated systems like the legacy data services platform, which urgently needs replacement. ## Arcadia's Proposed Solution and Service Model Arcadia offers a hybrid outsourcing model for utility data management, with two core services: - **Data ingestion** at ~$0.87-$0.97 per meter monthly, providing automated bill retrieval with 95% utility coverage (power/gas only). - **Credential management** at $3 per unique username/password setup (one-time) + $0.75 monthly maintenance per credential. This covers portal access setup and monitoring, with shared credential repositories for hybrid control. Key advantages include faster historical data delivery (14 days vs. 30-45 previously) and scalability for growing meter volumes. However, water utilities and non-standard commodities are excluded, requiring in-house handling. ## Cost-Benefit Analysis and Hybrid Approach Financial projections show substantial savings versus current operations: - **At 30,000 meters**: ~$792k annually for full Arcadia outsourcing (data + credentials) vs. $1.7M in-house costs. - **At 100,000 meters**: ~$1.1M with Arcadia, leveraging economies of scale. A hybrid model is proposed where Arcadia handles bulk data ingestion and credential setup, while internal teams: - Manage exceptions and missing bills (e.g., 5% gap in utility coverage). - Oversee QA and vendor performance, ensuring SLAs are met. - Retain web downloads for non-covered utilities or complex accounts. ## Operational Risks and Mitigation Strategies Critical concerns emerged about Arcadia’s ability to handle edge cases: - **Data gaps persist**: Even with automation, 3-5% of bills may require manual retrieval due to portal errors or uncovered utilities. - **Integration challenges**: Arcadia’s output must align with internal systems (e.g., location mapping for billing), risking reconciliation bottlenecks. - **Vendor limitations**: Their "set-and-forget" model lacks proactive issue resolution; clients must drive reconciliation for missing data. Mitigation includes contractual SLA penalties for service lapses and maintaining an internal "SWAT team" for exception handling. ## Evaluation and Next Steps Consensus favors advancing negotiations with Arcadia, pending resolution of key operational questions: - Clarify accountability for integration failures (e.g., mismatched location data in billing outputs). - Stress-test their response to real-world scenarios (e.g., credential lockouts, urgent bill retrievals). - Finalize meter-volume commitments to lock in tiered pricing. A dedicated session with Arcadia is planned to address these points, with strict confidentiality around the potential transition.
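As a rough worked example of the per-meter economics above, using the quoted rates: the credential count below is a made-up input (the meeting did not fix one), so this deliberately will not reproduce the ~$792k figure; it only shows how the pieces combine.

```python
def annual_arcadia_cost(meters: int, credentials: int,
                        per_meter_month: float = 0.92,  # midpoint of $0.87-$0.97
                        cred_setup: float = 3.00,       # one-time, per credential
                        cred_month: float = 0.75) -> float:
    """Rough year-one cost: data ingestion plus credential setup and upkeep."""
    data = meters * per_meter_month * 12
    creds = credentials * (cred_setup + cred_month * 12)
    return data + creds

# Illustrative only: 30,000 meters with a hypothetical 15,000 credentials.
print(f"${annual_arcadia_cost(30_000, 15_000):,.0f}")  # -> $511,200 year one
```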
DSS Check-in
## Summary ### Understanding Mock Bills and Their Processing Mock bills are temporary invoices created to ensure timely payments when actual bills are unavailable or backlogged. - **Purpose**: Used during backlogs or when invoices are missing, allowing payments to proceed with estimated amounts. - **Reconciliation process**: Actual bills are later matched against mock bills, adjusting for any discrepancies (e.g., differences in amounts result in corrections). - **Freezing accounts**: Mock bills freeze accounts to prevent duplicate payments until the actual bill arrives. ### Addressing Errors in Mock Bills Errors related to mock bills often stem from missing service addresses or validation issues. - **Error 2020**: Occurs when service addresses are missing from bills derived from mock templates. - **Resolution**: Service addresses from prior bills are inherited by mock bills, but subsequent actual bills lacking this data trigger freezes. - **Validation improvements**: Emphasized the need to flag mock bills clearly in reports to avoid confusion during audits. ### System Identification of Mock Bills Methods to identify mock bills within the system were clarified. - **Bill source flag**: Mock bills are tagged with a "mock" label in the bill source column. - **Bill chain visibility**: Clicking a mock bill’s ID in Power BI reveals its status and linked account freezes. - **PDF annotations**: Mock bills include visible markers (e.g., "MOCK") on generated PDFs for easy recognition. ### Processing Queues and Priorities The team discussed managing high-volume billing queues, particularly for Alta Fiber accounts. - **Queue status**: ~2,458 bills remained in the queue, with batches processed at 10 bills per cycle to optimize throughput. - **Progress update**: ~2,500 bills were cleared earlier in the day, demonstrating improved efficiency in DSS processing. - **Challenges**: No current method to prioritize specific accounts in the queue, necessitating manual oversight for urgent cases. ### Logistical Coordination for Follow-Up Final discussions focused on operational logistics for upcoming tasks. - **Team availability**: Confirmed key personnel would be present to assist with office access and system troubleshooting. - **Communication protocols**: Shared contact details to resolve issues if primary contacts are unavailable during critical processing windows.
DSS/UBM sync
## Summary ### Team Introductions and Onboarding New team members were introduced to support the Utility Bill Management (UBM) and carbon accounting products, focusing on bridging knowledge gaps between existing systems. Key onboarding activities include: - **Access provisioning**: Pending email setup for new members to enable access to FDG, Constellation, Teams, Slack, and Azure development environments. - **Technical familiarization**: New members are currently exploring Appsmith, Dromo, Power BI, and OCR technologies while awaiting codebase access. - **Integration strategy**: Initial involvement will center on observing daily standups and project workflows to build context before active contribution. ### Data Services (DSS) and UBM Integration Challenges A critical misalignment exists between DSS outputs and UBM's input expectations, prompting architectural adjustments: - **Middle-layer solution**: Development of an intermediary component to transform DSS data into UBM-compatible formats without modifying core DSS logic. - **Preservation of DSS integrity**: Avoidance of fundamental DSS changes due to UBM's evolving requirements and non-standardized assumptions. - **Scope definition**: The middle layer's longevity (temporary vs. permanent) remains undetermined pending further system analysis. ### Bill Processing System Optimization Urgent improvements are needed for the LR 2020 bill processing pipeline currently hampered by performance bottlenecks: - **Throughput increase**: Processing rate raised from 4 to 8 bills per 5-minute batch to address a backlog of 2,350+ bills. - **Performance monitoring**: Implementation of hourly checks to validate system stability under increased load. - **Backlog composition**: Identification of mock bills within the queue requiring potential filtering or special handling. ### Data Validation Issues Systematic errors in bill validation require targeted fixes across multiple fields: - **Meter serial number defects**: Output service failures occur when serial numbers contain strings instead of numerical values, necessitating DSS code updates to handle data type variations. - **Service address mapping failures**: Missing service address data triggering validation errors (code 2020), with root causes under investigation. - **Location mapping errors (code 3018)**: Occur when critical fields (account ID, meter ID, commodity type) lack location associations, requiring comprehensive cause analysis. ### Technical Debt and System Access Outstanding administrative and technical issues impacting workflow efficiency: - **Azure function access**: Resolution of permissions constraints blocking access to observation type configurations in DSS production functions. - **Legacy mock data**: Proliferation of test bills created during payment window simulations requires cleanup strategy. - **Cross-team dependencies**: Coordination needed with carbon accounting specialists for domain-specific context transfers. ### Strategic Priorities Immediate focus areas for the team include: - **Bill processing acceleration**: Maximizing throughput while ensuring system reliability through incremental batch size adjustments. - **Validation error resolution**: Parallel efforts to address meter serial number defects and service address/location mapping failures. - **Middleware foundation**: Preliminary design work for the DSS-UBM translation layer pending new team member integration.
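The meter-serial defect above amounts to code assuming a numeric type where alphanumeric values occur; a sketch of the tolerant handling follows, with the treatment of blanks an assumption.

```python
def normalize_meter_serial(value: object) -> str | None:
    """Treat meter serials as opaque strings, never as integers.

    The output-service failures described above occur when alphanumeric
    serials (e.g., "A-10293") hit code expecting a number; coercing every
    serial to a trimmed string sidesteps the type mismatch.
    """
    if value is None:
        return None
    serial = str(value).strip()
    return serial or None  # empty string -> treated as missing (assumption)

assert normalize_meter_serial(10293) == "10293"
assert normalize_meter_serial(" A-10293 ") == "A-10293"
assert normalize_meter_serial("") is None
```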
Postgres TLS Troubleshooting
## Application Gateway and PostgreSQL Connectivity Issue The team is troubleshooting a connectivity failure between Azure Application Gateway and PostgreSQL, where the gateway initiates a TLS handshake but receives no response from the database. - **Issue diagnosis**: Packet captures confirm SYN packets reach PostgreSQL, but no acknowledgment is returned, indicating a breakdown in the TLS handshake phase. - **Configuration context**: - The backend pool uses the PostgreSQL FQDN (`surname.postgresql.azure.com`), with a NAT gateway providing a static egress IP allowed in the firewall. - Custom DNS is configured, though DNS resolution has been ruled out as the root cause. - **Escalation path**: Microsoft Support is engaged, with plans to involve the PostgreSQL team to investigate the unresponsive "hello" handshake (see the protocol note after this summary). ## Data Ingestion Process from GCP to Azure Data migration from Google Cloud Platform (GCP) to Azure Blob Storage is now operational using rclone after AzCopy encountered compatibility issues. - **Implementation details**: - A VM in GCP runs rclone to transfer large datasets directly to Azure Blob Storage, leveraging retry mechanisms and batching for reliability. - The solution bypasses earlier Data Factory limitations, though the transfer remains ongoing due to data volume. - **Status**: The workflow is stable, with no immediate blockers reported. ## AKS Deployment Pipeline Setup Preparing Azure Kubernetes Service (AKS) deployment pipelines is critical to unblock development teams, targeting completion by Thursday. - **Requirements**: - Validating permissions and access for container registry deployments to AKS. - Establishing individual pipelines for each application component to enable development workflows. - **Dependencies**: Success hinges on resolving networking/access issues promptly, with a steering committee milestone on the 20th driving urgency. ## Application Gateway Upgrade Initiative A large-scale upgrade of 70+ Application Gateways from v1 to v2 is in progress across the Azure environment. - **Migration strategy**: - Building parallel v2 environments with identical settings, including Global Server Load Balancing (GSLB) and F5 front-end configurations. - Rigorous traffic flow validation precedes cutover to ensure zero service disruption. - **Scope**: This enterprise-wide effort prioritizes modernization ahead of v1 end-of-life. ## HTTP Application Gateway for AKS An additional Application Gateway dedicated to HTTP traffic for AKS was identified as a pending requirement. - **Purpose**: The gateway will manage ingress traffic for AKS-based applications, complementing existing infrastructure. - **Next steps**: Configuration details from prior tickets will be revisited to expedite implementation.
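One protocol detail worth ruling out here: PostgreSQL does not accept a raw TLS handshake on port 5432; the client must first send a plaintext SSLRequest packet and wait for an "S" byte before negotiating TLS. A TLS-terminating front end that skips that preamble can therefore produce exactly this symptom, a hello that gets no answer. The sketch below reproduces the correct sequence; the hostname is the placeholder from the capture notes.

```python
import socket
import ssl
import struct

HOST = "surname.postgresql.azure.com"  # placeholder FQDN from the capture notes
PORT = 5432

raw = socket.create_connection((HOST, PORT), timeout=10)
# PostgreSQL expects a plaintext SSLRequest before any TLS bytes:
# Int32 message length (8) followed by the magic request code 80877103.
raw.sendall(struct.pack("!ii", 8, 80877103))
reply = raw.recv(1)  # b"S" = proceed with TLS, b"N" = server refuses TLS
if reply == b"S":
    context = ssl.create_default_context()
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        print("TLS established:", tls.version())
else:
    print("Server declined TLS upgrade:", reply)
```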
Validation Rules Alignment
## Meeting Context and Objectives
The meeting focused on resolving confusion around bill validation rules, specifically addressing the concept of "muting" non-critical errors to streamline payment processing. Key goals included clarifying terminology, aligning on business priorities, and confirming current system behavior to prevent miscommunication within the team.

### Background and Urgency
- **Immediate operational pressure**: Significant bill payment delays and reconciliation issues for prepay customers were reported, with AP file inaccuracies causing month-end reconciliation failures.
- **Terminology confusion**: The term "muting" lacked a shared definition; the team debated whether it meant disabling validations entirely or auto-resolving errors while retaining audit trails.

## Business Priority Alignment
- **Critical focus areas**: Bill payments (ensuring timely payments to avoid utility shutoffs) and AP file accuracy (for accounting integration with platforms like Oracle) were identified as top priorities.
- **Impact assessment**: While 95% of bill payments function despite validation errors, over 70% of AP files for prepay customers are currently affected by unresolved data discrepancies.

## Validation Rule Implementation
- **Auto-resolution framework**: All validation errors except seven specific critical rules are automatically resolved by the system, eliminating manual intervention for non-blocking issues.
- **Edge case handling**: Bills with multiple errors (including one of the seven critical errors) display legacy errors until reparsed, after which only critical errors persist.

### Critical vs. Non-Critical Rules
- **Non-critical rules**: Primarily affect analytical data rather than payment/AP file functionality (e.g., meter serial number mismatches or service-day miscalculations).
- **Critical rules**: Seven undisclosed validations remain active to prevent payment failures or AP file corruption, requiring manual resolution.

## Mock Bill Process
- **Purpose and urgency**: Mock bills are created manually when physical utility bills are delayed, enabling on-time payments to avoid service disconnections during tight payment windows (e.g., 15-day deadlines).
- **Validation parity**: Mock bills undergo identical validation checks as physical bills, occasionally triggering errors (e.g., virtual account mismatches) despite streamlined creation workflows.

## Technical Deep Dive: Error Resolution
- **Days-of-service discrepancy**: A recurring error where billing period dates conflict with calculated service days (a worked example follows this summary).
  - **Root cause**: Systems inconsistently apply "end_date minus start_date" versus "end_date minus start_date plus 1" formulas.
  - **Solution proposal**: Enhance BSS/UBM integration to flag whether service days are provider-sourced or system-calculated.
- **Observation codes**: Needed from DSS to resolve 2007/2008 errors, enabling automated fixes for line-item inaccuracies without bulk data operations.

## Communication and Next Steps
- **Clarification directive**: The team will formally confirm that only seven validation rules remain active, with auto-resolution handling all other errors.
- **Data dependency**: A list of DSS observation codes is required to address residual data inconsistencies affecting AP file reconciliation.
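The inclusive/exclusive ambiguity behind the days-of-service error is easiest to see with concrete dates. A minimal worked example:

```python
from datetime import date

start, end = date(2025, 9, 1), date(2025, 9, 30)

# The two formulas the systems apply inconsistently:
exclusive_days = (end - start).days        # 29 -- "end_date minus start_date"
inclusive_days = (end - start).days + 1    # 30 -- "... plus 1"

# A provider that bills 9/1-9/30 as 30 days of service trips the validation
# whenever the consuming system recomputes the span exclusively.
assert exclusive_days == 29 and inclusive_days == 30
```

Flagging each value as provider-sourced or system-calculated, as proposed, removes the ambiguity about which formula produced the number being validated.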
Bill Error Validation
## Invoice Processing Challenges and System Visibility
The meeting focused on persistent issues in invoice processing, particularly discrepancies between line items and totals, misattributed accounts, and manual intervention bottlenecks. A key proposal involved creating **two distinct processing "swim lanes"** to differentiate between fully automated DSS workflows and invoices requiring manual handling. This would improve root cause analysis by clarifying whether errors stem from technical failures or human input, reducing miscommunication between teams.

### Error Code Analysis and Prioritization
- **Error Code 2020 (Missing Data Fields):** Investigated missing meter serials and service addresses, revealing four root causes, including gaps in post-DSS logic (e.g., falling back to commodity type if meter data is absent).
- **Error Code 3019 (Charge Discrepancies):** Addressed usage/price deviations exceeding historical thresholds. Proposed muting non-critical alerts (e.g., charges 15% higher than prior bills) to reduce operational noise, as these often lack actionable follow-up (a thresholding sketch follows this summary).
- **Spreadsheet-Driven Tracking:** A unified spreadsheet is being developed to categorize errors by origin (DSS, Ops, UBM) and prioritize fixes, aiming to streamline troubleshooting for 5,000+ backlogged invoices.

### Mock Bill Reconciliation Challenges
Discrepancies in mock bill dates/amounts complicate matching them to actual invoices (e.g., mock bills covering 9/1-9/30 lacking corresponding real invoices). This creates ambiguity in applying payments and requires manual tracing, slowing resolution.

### Systemic Solutions vs. Manual Fixes
- **Permanent Fixes Over Band-Aids:** Emphasized resolving root causes (e.g., LLM missing invoice detail sections) rather than daily manual corrections, which risk recurring issues.
- **Data Integrity Focus:** Highlighted the need to mute non-essential alerts (e.g., "usage over by 10%") system-wide to free bandwidth for critical integrity checks.

### Operational Efficiency Improvements
- **Audit Trail Visibility:** Proposed enhancing UBM reporting to track invoice pathways, reducing time spent tracing manual/automated hops (currently 30-45 minutes per case).
- **Cross-Team Alignment:** Stressed synchronizing Dev and Ops teams to audit error patterns collaboratively, avoiding redundant emails/meetings.

The discussion underscored balancing urgent backlog clearance with structural fixes to prevent future bottlenecks, particularly as new invoices compound existing issues daily.
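The muting proposal amounts to a deviation threshold. The sketch below is one hedged way to express it; the 15% cutoff comes from the example above, while the 50% escalation band and the tier names are invented for illustration.

```python
# Thresholds: MUTE_BELOW reflects the ~15% example from the meeting;
# ALERT_ABOVE is a hypothetical escalation band.
MUTE_BELOW = 0.15
ALERT_ABOVE = 0.50

def triage_charge(current: float, prior: float) -> str:
    """Classify a charge's deviation against the prior bill."""
    if prior <= 0:
        return "review"  # no usable baseline to compare against
    deviation = abs(current - prior) / prior
    if deviation < MUTE_BELOW:
        return "muted"   # auto-resolve but keep the audit trail
    if deviation < ALERT_ABOVE:
        return "queue"   # non-critical; batch for ops review
    return "alert"       # critical; block and notify

assert triage_charge(113.0, 100.0) == "muted"  # 13% over the prior bill
assert triage_charge(170.0, 100.0) == "alert"
```

Keeping the muted tier as an auto-resolution (rather than disabling the check) preserves the audit trail the team wanted to retain.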
Meter Serial Logic Review
## Summary

### Meter Serial Number Logic in UBM
The meeting clarified rules for handling meter serial numbers in utility bill management systems. When no meter serial exists, the system defaults to the commodity type (e.g., "FIREPROTECTION" without spaces). Commodity types like refuse, recycling, and propane typically lack meter serials, though rare exceptions occur where vendors assign them unexpectedly. For any numerical value present, including "0" or "1", the exact meter serial from the bill must be captured verbatim (a rule sketch follows this summary).
- **Commodity type defaulting**: Ensures consistent labeling when meter data is absent, using standardized all-caps formatting without spaces.
- **Exception handling for uncommon commodities**: While refuse/recycling rarely include meter numbers, propane deliveries occasionally feature them, requiring flexible validation.
- **Numerical value capture**: Any digit sequence listed as a meter number on the bill is prioritized, even if non-standard, to maintain data accuracy.

### Commodity-Specific Meter Serial Exceptions
Certain utility types were identified as high-risk for missing meter serials, necessitating tailored validation. Refuse, recycling, and propane services almost never include meter numbers, but sporadic vendor practices can introduce anomalies.
- **Refuse/recycling consistency**: These services exhibit 99.99% absence of meter numbers, making them prime candidates for automated commodity-type fallback.
- **Propane delivery variability**: Though typically meter-free, occasional vendor-assigned serials require safeguards to prevent LLM (language model) oversight during processing.

### Complex Meter Block Processing
Electric bills with multiple meter blocks (e.g., delivery vs. supply charges) pose extraction challenges. The LLM struggles to consolidate usage data from line items under a single meter, often creating duplicate entries instead.
- **Multi-meter billing issues**: For accounts with separate delivery/supply charges, the LLM fails to associate usage with a unified meter, generating redundant meter records.
- **Solution approach**: Manual review by operations teams combined with phased LLM training was proposed, alongside collecting real-world examples to refine detection logic.

### Output File Access and Debugging
Access to FTP shares and output files was demonstrated to enable independent troubleshooting of meter serial discrepancies. The process involves navigating specific network paths via the Ops DBAS desktop to locate CSV output files.
- **FTP directory structure**: Key folders include production output for finalized data and upload directories for batch processing, with detailed path sharing for team access.
- **File retrieval workflow**: Outputs are searchable by batch name/completion date, though current system limitations require copying files externally for CSV viewing.

### Bill Interpretation Challenges
Specific bill examples highlighted complexities in mapping service addresses, meter blocks, and charges. One case showed a bill with mixed commodity types (electric vs. lighting), where only the primary service included a meter number.
- **Service address identification**: Clarified using unit-specific details (e.g., "2817 West End Ave. Unit 107") when bills list multiple locations.
- **Commodity differentiation**: Lighting line items lack meter numbers and default to commodity types, unlike electric services with explicit meter serials.

### Prior Balance and Late Fee Handling
Bills with ambiguous prior balance labeling (e.g., informational vs. actual late fees) cause LLM inconsistencies. Some vendors display hypothetical late fees in headers rather than itemized charges, leading to extraction errors.
- **Prior balance ambiguity**: Distinguishing between real overdue amounts and informational warnings is critical, as misclassification skews charge totals.
- **LLM performance variability**: Detection accuracy for late fees is inconsistent across vendors, with some requiring exclusion from automated processing due to non-standard formatting.

### Debugging Strategy and Next Steps
The team prioritized creating systematic approaches to track meter serial failures and streamline issue resolution. Key actions include deciphering output file naming conventions and developing failure reports.
- **Naming convention analysis**: Understanding batch ID patterns will accelerate output file retrieval without relying on manual searches.
- **Documentation and reporting**: Plans include building error-tracking reports and maintaining detailed logs to catalog recurring issues and solutions.
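The capture rules described at the top of this summary reduce to a short function. The sketch below encodes them directly; the commodity list mirrors the examples discussed, and the function name is invented for illustration.

```python
def resolve_meter_serial(raw_serial: str | None, commodity: str) -> str:
    """Apply the meter-serial capture rules discussed for UBM.

    Rule 1: any value printed on the bill -- even "0" or "1" -- is kept
    verbatim. Rule 2: with no serial at all, fall back to the commodity
    type in standardized all-caps form without spaces.
    """
    if raw_serial is not None and raw_serial.strip():
        return raw_serial.strip()
    # e.g. "fire protection" -> "FIREPROTECTION"
    return commodity.upper().replace(" ", "")

assert resolve_meter_serial("0", "electric") == "0"
assert resolve_meter_serial(None, "fire protection") == "FIREPROTECTION"
assert resolve_meter_serial("  ", "refuse") == "REFUSE"
```

Keeping the verbatim rule ahead of the fallback also covers the rare vendor-assigned serials on propane deliveries noted above.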
Review HubSpot Issue Pipeline
## Customer
The customer operates in a post-sale support environment requiring coordination across multiple internal teams (CSMs, UBM Ops, Data Services, and Dev) to resolve ongoing client issues. Their workflow currently relies heavily on untracked email communication, leading to visibility gaps in issue resolution status, assignment ownership, and historical tracking. The core need is transitioning from fragmented email threads to a structured, auditable system for managing customer-reported problems.

## Success
The most significant achievement discussed is the design of a **centralized ticketing pipeline** within HubSpot. This system will enable:
- **Automated ticket creation** via CSM submissions or customer-facing forms
- **Categorization of issues** into predefined types (e.g., billing, technical) with "Other" flexibility
- **Team-based assignment workflows** ensuring clear ownership across CSM, Ops, Data Services, and Dev teams (a routing sketch follows this summary)
- **Status transparency** through defined stages like "In Progress," "Ready for CSM Review," and "Resolved"

## Challenge
The primary obstacle is **designing a flexible yet standardized workflow** that accommodates:
- Variable resolution paths (single-team vs. multi-team handoffs)
- Maintaining context during ticket transitions between teams
- Integrating customer-submitted forms with internal ticket creation
- Future-proofing for Jira integration without immediate implementation
- Balancing granular categorization ("top 5 issue types + Other") with reporting needs

## Goals
Key objectives for the solution include:
- **Eliminating email dependency** through HubSpot-based ticket creation
- **Implementing assignment logic** based on issue type and customer-CSM mapping
- **Enabling cross-team collaboration** with resolution notes and audit trails
- **Building reporting dashboards** for issue trends, team performance, and SLA compliance
- **Developing customer-facing form integration** within the product interface
- **Phased rollout**: CSM ticket creation first, followed by customer submission capabilities
- **Attachment support** and **escalation mechanisms** for urgent issues
- **Historical tracking** of all customer interactions per account
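The assignment logic in the goals above can be prototyped independently of HubSpot. The sketch below is a hypothetical model of the routing table: the issue types, team names, and stage label are placeholders drawn from the categories discussed, not actual HubSpot properties.

```python
# Hypothetical issue-type -> owning-team table; "Other" falls back to the CSM.
ROUTING = {
    "billing": "UBM Ops",
    "data": "Data Services",
    "technical": "Dev",
}

def assign_ticket(issue_type: str, customer_csm: str) -> dict:
    """Build a minimal ticket record with team-based assignment."""
    known = issue_type in ROUTING
    return {
        "issue_type": issue_type if known else "other",
        "owner_team": ROUTING.get(issue_type, "CSM"),
        "csm_contact": customer_csm,  # kept for the "Ready for CSM Review" stage
        "stage": "New",
    }

ticket = assign_ticket("data", customer_csm="jane.doe")  # hypothetical CSM
assert ticket["owner_team"] == "Data Services"
```

Encoding the table as data rather than branching logic makes the "top 5 issue types + Other" balance easy to revisit as reporting needs evolve.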
Meter Serial Logic Review
## Summary

### Meter Serial Data Capture Challenges
The meeting focused on addressing issues with meter serial data capture in the system, particularly for the "lighting" commodity. Key problems included missing meter serials in production blocks and inconsistencies in how the system handles fallback logic when meter data is absent.
- **Missing meter serials**: For the "lighting" commodity, meter serials were frequently absent in production data, causing processing failures. Operations (OPS) suggested using the commodity name (e.g., "lighting") as a fallback when meter serials are unavailable, but this triggered additional errors in downstream systems.
- **System behavior gaps**: Data Services (DSS) implemented logic to auto-populate missing meter serials with commodity names, but Utility Bill Management (UBM) rejected these entries, potentially due to expecting numerical or mnemonic values instead of string-based inputs.

### Root Cause Analysis
Four core issues were identified, driving the need for comprehensive documentation and targeted fixes:
- **Legitimate data capture failures**: The system failed to extract meter serials present in source documents (e.g., bills), requiring improved pattern recognition.
- **Absent meter serial edge cases**: Certain commodities (e.g., fire protection) never include meter serials, necessitating permanent fallback to commodity names.
- **Inconsistent fallback logic**: For commodities like "lighting" that occasionally lack meter serials, the system lacked standardized rules to default to commodity names.
- **UBM validation failures**: Even when commodity names were substituted for meter serials, UBM rejected entries due to format mismatches, indicating deeper integration issues.

### Solution Framework
A phased approach was proposed to resolve these issues, prioritizing immediate fixes and long-term logic refinement:
- **Immediate review of error cases**: 37 specific instances of meter serial errors (from the error 2020 dataset) will be audited to categorize failures (e.g., extraction gaps vs. fallback needs). Each case will document:
  - Whether the meter serial was present but missed by the system.
  - Commodity-specific patterns requiring fallback rules.
- **LLM and DSS enhancements**:
  - Improve the Language Model (LLM) to accurately capture meter serials from source documents.
  - Refine DSS logic to auto-populate commodity names only for predefined edge cases (e.g., commodities that never have meter serials).
- **UBM compatibility testing**: Investigate why string-based meter serials (e.g., "lighting") fail UBM validation, exploring solutions like data transformation or configuration updates.

### Data Validation Strategy
A hands-on validation process was defined to ensure accurate issue categorization and resolution tracking:
- **Error log analysis**: The team filtered a dataset using the "error message" column to isolate 37 meter serial-related failures for manual review.
- **Findings documentation**: Each case will include:
  - Bill IDs for reprocessing validation.
  - Classification into one of the four root causes (e.g., "LLM extraction failure" or "fallback logic gap").
- **Reprocessing workflow**: After fixes are implemented, bills will be reprocessed to verify that meter serials are now correctly captured or substituted.

### Forward Plan
Next steps emphasize iterative testing and cross-team alignment to prevent recurrence:
- **Prioritize the error 2020 set**: Address all 37 meter serial issues first, then expand to other error types in the dataset.
- **Collaboration with OPS**: Consult OPS teams to identify common meter serial failure patterns and validate fallback logic for high-risk commodities.
- **Documentation protocol**: Capture all implemented logic (e.g., "If commodity type = fire protection, default meter serial = fire protection") in a central repository for future reference and auditing.
- **Systematic reprocessing**: After initial fixes, reprocess bills to measure success rates and identify residual gaps before broader deployment.
UBM Errors Check-in
## Summary

### Upcoming Meeting with Simon
The meeting began with uncertainty about an 11:00 AM session involving Simon, a large mall owner requiring integration support. Initial confusion arose over attendee inclusion, particularly whether a key team member (Jay) was invited. After reviewing the invite details, it was determined the meeting focused solely on Constellation Navigator for UBM (a specific client), and attendance might be necessary pending further clarification from internal contacts.

### Q4 Timeline and CSM Alignment
A review of the Q4 timeline was conducted with Carrie (Product Management) and Tara (Customer Success), focusing on deliverables from a Customer Success Manager (CSM) perspective. Key points included:
- **CSM-driven priorities**: The timeline was assessed for feasibility, with emphasis on ensuring alignment between CSM requirements and the team’s technical capabilities.
- **Ownership clarity**: Discussions highlighted the need to distinguish CSM responsibilities (e.g., user support) from product/process issues, especially to prevent CSMs from becoming bottlenecks for non-CSM tasks.
- **Ticketing system urgency**: Implementing a formal ticketing system was prioritized to streamline communication, reduce ad-hoc requests (e.g., emails with attachments), and free up bandwidth for core development work.

### Bill Pay Process and Data Integrity
Ensuring accurate data flow from invoice receipt to payment emerged as a critical Q4 objective to prevent service disconnections. Challenges included:
- **Error backlog management**: Fixing data issues (e.g., meter serial number inaccuracies) requires a two-step approach: stopping future errors and reprocessing affected historical bills, with the latter lacking a scalable solution.
- **System limitations**: The current setup cannot auto-reprocess bills, forcing manual intervention and creating operational delays.
- **Interpreter layer proposal**: A middleware solution was suggested to translate DSS output into UBM-compatible formats, reducing the need for risky changes to UBM’s core logic.

### Error Resolution Strategy
Addressing recurring errors (e.g., Code 3018, unmapped bill blocks) demands structured processes:
- **Documentation rigor**: Creating a living error-code repository with clear ownership, resolution steps, and timelines (e.g., "For Code 3018: CSMs manually map virtual accounts by Friday; Ruben implements UI fix by November 10").
- **Prioritization framework**: Focusing on high-volume errors first (e.g., 60% of meter serial issues are "low-hanging fruit"), while acknowledging some require long-term solutions.
- **Cross-functional workshops**: Holding frequent sessions with Ops, CSMs, and developers to align on error specifics and avoid recurring backlogs.

### Reporting and Data Tracking
New reporting requirements for FDG Connect (DSS and UBM) were discussed:
- **Data gaps**: Certain reports necessitate tracking previously unmonitored metrics, requiring backend changes beyond simple Power BI visualizations.
- **Workshop collaboration**: Internal CSM workshops will define report specifications, followed by joint sessions with the data team to ensure feasibility.
- **Timeline awareness**: Emphasis on starting complex tracking early due to compressed timelines (e.g., year-end holidays affecting delivery).

### Team Onboarding and Access
New engineers joining the team need accelerated integration:
- **Access provisioning**: Urgent requests for read access to systems (e.g., UBM reporting databases) to familiarize them with existing workflows.
- **Knowledge transfer**: Plans to include them in daily stand-ups and provide context via recorded sessions, with a focus on understanding system vernacular before tackling technical intricacies.
- **Role boundaries**: Initially limiting their scope to FDG Connect-related tasks to avoid overwhelming them with UBM’s complexity.

### Long-Term Vision and Risks
Strategic discussions centered on sustainable solutions for recurring issues:
- **Location ID management**: For errors like unmapped service addresses (Code 3018), 20% could be auto-fixed via fuzzy matching (a matching sketch follows this summary), but 80% require manual CSM input, highlighting a need for UI-driven bulk-creation tools.
- **Risk of inaction**: Without scalable fixes, error volumes could escalate (e.g., 6,000+ backlog entries), crippling operations.
- **Leadership updates**: Planning an in-person briefing to showcase progress, risks, and forward strategy once immediate fires are contained.
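The fuzzy-matching idea for unmapped service addresses can be illustrated with the standard library alone. The sketch below uses `difflib.SequenceMatcher`; the 0.85 threshold and the sample addresses are assumptions, and anything below the threshold stays in the manual queue, mirroring the 20%/80% split discussed above.

```python
from difflib import SequenceMatcher

def best_location_match(service_address: str, known_locations: list[str],
                        threshold: float = 0.85) -> str | None:
    """Suggest an existing UBM location for an unmapped service address.

    Returns a candidate only above the similarity threshold; everything
    else is left for manual CSM review.
    """
    target = service_address.lower().strip()
    best, score = None, 0.0
    for loc in known_locations:
        ratio = SequenceMatcher(None, target, loc.lower().strip()).ratio()
        if ratio > score:
            best, score = loc, ratio
    return best if score >= threshold else None

locations = ["2817 West End Ave Unit 107, Nashville, TN"]  # hypothetical
print(best_location_match("2817 west end ave. unit 107 nashville tn", locations))
```

A production version would normalize abbreviations (Ave/Avenue, Unit/#) before comparing, which is where most of the auto-fixable 20% likely comes from.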
DSS-UBM Alignment
## Summary

### Backlog Processing and System Improvements
The team focused on addressing the September backlog, emphasizing the need for efficient processing and system enhancements. Key improvements include a new feature that verifies past due amounts by comparing current bills with previous charges, triggering errors for mismatches to enable auto-fixing. Bulk-loading capabilities are being prioritized to expedite backlog clearance, with testing underway for linking and closing functionalities. However, concerns were raised about operators handling past-due accounts without proper oversight, potentially leading to errors.

### Error Analysis: 2020 and 3018
Error 2020 (missing fields) and 3018 (compound errors) were identified as critical bottlenecks.
- **Error 2020**: Primarily caused by absent service addresses or meter serial numbers. For example:
  - Constellation invoices often lack meter numbers or include incorrect placeholders (e.g., group numbers), causing mapping failures.
  - Service addresses are inconsistently formatted on bills, sometimes missing ZIP codes or differing from UBM location addresses.
- **Error 3018**: A cascading issue where unresolved 2020 errors compound. Fixing service address gaps (e.g., auto-pulling ZIP codes from historical data) could resolve ~600 cases.

### Address and Meter Data Challenges
Discrepancies between service addresses (on bills) and location addresses (in UBM) complicate mapping.
- **Service Address Fallbacks**: When bills lack addresses, the system will default to previous invoices for the same account/meter. Edge cases (e.g., incomplete city/state) require manual review.
- **Meter Serial Issues**: Utilities frequently change meter numbers, and some invoices lack them entirely (e.g., "lighting" or "sewer" labels). This forces manual intervention, as synthetic meter IDs disrupt automated mapping.

### Virtual Account Constraints and System Integration
The virtual account construct, which requires five fields (vendor code, account number, service address, meter serial, control code), is a core bottleneck.
- **UBM Limitations**: UBM’s rigid dependency on all five fields clashes with DSS’s inconsistent data output. For instance, missing meter numbers (due to utility changes or artificial labels) break the mapping workflow.
- **Proposed Solutions**:
  - An intermediate layer between DSS and UBM to translate data and handle mismatches, e.g., linking accounts with four fields if meter numbers change (a matching sketch follows this summary).
  - Softening UBM rules to allow partial matches, though this risks downstream impacts on billing and auditing.

### Strategic Adjustments and Follow-Up
Long-term fixes require rethinking system interoperability and virtual account logic.
- **Documentation Review**: The team will examine legacy FDG adapter documentation in Confluence (search: "virtual accounts") to understand integration assumptions.
- **System Logic Overhaul**: Exploring automated account linking to reduce manual effort, especially for meter-number volatility. A deeper analysis of UBM’s virtual account framework is planned to align it with real-world data variability.
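The four-fields-of-five fallback can be shown with the five fields named in the summary. The sketch below is an illustration of the proposed intermediate-layer behavior, not UBM's actual matching code; the types and function are invented.

```python
from typing import NamedTuple, Optional

class VirtualAccountKey(NamedTuple):
    vendor_code: str
    account_number: str
    service_address: str
    meter_serial: str
    control_code: str

def match_virtual_account(bill_key: VirtualAccountKey,
                          known: list[VirtualAccountKey]
                          ) -> Optional[VirtualAccountKey]:
    """Match on all five fields, then fall back to four.

    The four-field pass ignores the meter serial to absorb utility-driven
    meter swaps; it is the "softened rule" floated in the meeting, and a
    hit there should still be flagged for audit before payment.
    """
    for account in known:
        if account == bill_key:
            return account  # exact five-field match
    loose = bill_key._replace(meter_serial="")
    for account in known:
        if account._replace(meter_serial="") == loose:
            return account  # meter changed; link, then route to review
    return None
```

Putting this in the middle layer keeps UBM's strict five-field contract intact while still clearing the meter-volatility cases.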
Constellation Navigator Migration to Microsoft Azure - Large-Scale Data Migration
## Data Transfer Challenges from Google Cloud to Azure
The team encountered significant issues migrating 2.5 terabytes of data (comprising millions of files) from Google Cloud Storage to Azure Storage.

### Azure Data Factory Pipeline Failures
The initial approach using Azure Data Factory (ADF) consistently failed after transferring ~800 GB of data.
- **Integration runtime limitations**: The auto-resolve integration runtime struggled with the data volume, causing pipeline crashes without clear error resolution.
- **Unreliable error handling**: Error messages were inconsistent (e.g., AWS-related errors appeared despite using GCP-Azure connectors), complicating troubleshooting.
- **Public internet bottlenecks**: Data transfer occurred over the public internet, introducing latency and reliability issues for large-scale operations.

### Evaluation of Alternative Transfer Tools
Discussions shifted toward more robust tools like **AZCopy** and **Azure Storage Mover** to address ADF's limitations.
- **AZCopy advantages**:
  - Built-in retry logic ensures reliability during transient network failures.
  - SAS token authentication provides secure data transfer without routing through intermediary networks.
  - Efficient API utilization optimizes bandwidth usage for large datasets.
- **Hybrid approach proposal**: Use AZCopy for the initial bulk migration and ADF for incremental syncs (twice daily) to handle ongoing data writes from the live application.

---

## Application Gateway Configuration for PostgreSQL TLS
Efforts to terminate TLS for PostgreSQL client traffic using Azure Application Gateway faced protocol-specific hurdles.

### PostgreSQL TLS Handshake Incompatibility
The gateway failed to interpret PostgreSQL’s unique TLS initiation sequence.
- **Protocol mismatch**: PostgreSQL clients initiate TLS with an 8-byte "SSL request" packet, which Application Gateway doesn’t recognize (unlike HTTP-based TLS).
- **Current workaround limitations**: Disabling backend TLS and relying on TCP-only communication proved ineffective, causing connection resets.

### Preview Feature Constraints and Alternatives
A preview feature ("TCP TLS proxy") was evaluated but remains unsupported for PostgreSQL.
- **SQL Server vs. PostgreSQL disparity**: While documented examples exist for SQL Server TLS termination, PostgreSQL’s protocol nuances lack official support.
- **Pending GA release**: A preview feature enabling TLS termination for non-HTTP protocols is nearing general availability but wasn’t validated for PostgreSQL during the meeting.
- **External tool references**: Solutions like *AG Proxy* and *HAProxy* were noted as alternatives that handle PostgreSQL’s TLS handshake correctly.

---

## Next Steps and Resolution Paths

### Data Transfer Strategy
- Prioritize **AZCopy** for the initial data migration, leveraging its resilience for large-scale transfers over the public internet (an invocation sketch follows this summary).
- Explore **Azure Storage Mover** as a secondary option if AZCopy encounters unforeseen constraints.

### Application Gateway Troubleshooting
- Review existing Microsoft support tickets (referenced but not detailed in the meeting) for PostgreSQL-specific TLS issues.
- Test the GA version of the TCP TLS proxy feature upon release to validate PostgreSQL compatibility.
- Share SQL Server TLS termination examples as a reference for potential configuration adjustments.

---

## Closing Coordination
The transcript will be circulated to all participants for reference. A follow-up meeting is scheduled for Monday to review progress on the Application Gateway configuration and data migration status.
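For the bulk-migration step, one hedged way to drive AZCopy is a thin wrapper that restarts the process if a multi-hour run dies outright (AZCopy already retries transient failures internally). The bucket, storage account, SAS token, and credential path below are all placeholders, and the GCS-source support assumes a reasonably recent AZCopy release.

```python
import os
import subprocess

# Placeholder endpoints; substitute the real bucket and a container URL
# carrying a SAS token, per the authentication approach discussed.
SOURCE = "https://storage.cloud.google.com/example-gcp-bucket"
DEST = "https://exampleacct.blob.core.windows.net/landing?<sas-token>"

# AZCopy reads GCP credentials from this standard env var for GCS sources.
os.environ.setdefault("GOOGLE_APPLICATION_CREDENTIALS", "/keys/gcp-sa.json")

for attempt in range(3):
    # --recursive walks the whole bucket; AZCopy handles per-file retries.
    result = subprocess.run(["azcopy", "copy", SOURCE, DEST, "--recursive"])
    if result.returncode == 0:
        break
```

Pairing a run like this for the initial load with ADF for the twice-daily incremental syncs matches the hybrid proposal above.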
Review HubSpot Issue Pipeline
## System Issues and Fixes
Addressing challenges with existing builds and processing systems to resolve operational bottlenecks.

### Reprocessing Existing Builds
Identifying solutions for reprocessing legacy builds affected by unresolved issues like "do not use" tags and blank meter fields.
- **Bulk update challenges**: Manual identification and reprocessing of thousands of affected accounts is impractical without automated tools or queries.
- **Commodity type adjustments**: Converting problematic meters to standardized types (e.g., electric) for easier inactivation, though system constraints prevent direct updates.
- **Legacy invoice impact**: Older invoices bypass reprocessing mechanisms, requiring targeted fixes to prevent recurrence in new builds.

### Web Download Helper Issues
Resolving task reordering errors causing misallocated payments and processing delays.
- **Incorrect upload consequences**: Task reordering during web downloads leads to invoices uploaded to the wrong accounts, resulting in erroneous payments and reconciliation overhead.
- **Processor workload impact**: New team members struggle with error identification, while experienced staff spend extra time verifying placements, reducing overall efficiency.

### Mapping and Integrity Check Failures
Troubleshooting UBM (Utility Bill Management) mapping errors triggering integrity checks.
- **Field validation gaps**: Key fields (e.g., commodity type, control code) lack standardized validation, causing batch ingestion failures and manual reprocessing.
- **Blank line vulnerabilities**: CSV files with blank lines frequently fail ingestion, requiring manual review and resubmission by the operations team (a pre-flight filter sketch follows this summary).

---

## Data Services and UBM Operations
Optimizing data flows between systems and improving issue resolution protocols.

### Observation Types and Historical Data
Leveraging Azure functions to validate service item types and historical meter alignment.
- **Real-time data validation**: Using the `/docs` endpoint in Azure to pull live service item descriptions, ensuring consistency with BSS/IPS requirements.
- **Historical meter discrepancies**: Legacy systems show partial data overlap (e.g., 3 of 7 line items matched), causing automated corrections to skip unmapped entries.

### UBM Error Management
Streamlining handling of batches rejected during ingestion.
- **Manual reprocessing workflow**: Failed batches emailed to the team are manually re-uploaded, reprocessed, and resent to UBM, focusing on blank-line and commodity mismatches.
- **Escaped validation issues**: Some bills with known problems (e.g., "do not use" tags) bypass "ready to send" checks, requiring deeper system audits.

---

## Reporting and Dashboard Enhancements
Developing consolidated reports to improve operational visibility and decision-making.

### Download Activity Monitoring
Creating unified reports to track invoice download timeliness and task management.
- **Task lifecycle tracking**: Monitoring creation, snooze, completion dates, and assignees to identify bottlenecks in download workflows.
- **Vendor/customer metrics**: Including vendor names, customer IDs, and time-to-download metrics to measure efficiency gaps (e.g., 30-day delays).

### UBM Operator Performance
Reporting on issue resolution efficacy within UBM processing.
- **Validation resolution tracking**: Capturing operator names, bill IDs, validation check IDs/names, and resolution dates to audit fix timelines.
- **Error trend analysis**: Aggregating recurring error types (e.g., mapping failures) to prioritize systemic fixes over one-off corrections.

### System Status Reporting
Revitalizing the system status dashboard for real-time health monitoring.
- **Jira integration**: Linking HubSpot/Zendesk tickets to development workflows to automate issue tracking and resource allocation.
- **Data source limitations**: Current reliance on manual data exports hinders scalability; live connections to UBM/DSS data are needed.

---

## Process Improvements and Integrations
Upgrading customer-facing pipelines and security protocols.

### Customer Issue Pipeline
Implementing HubSpot for end-to-end customer issue management.
- **Phase 1 (Manual)**: Customer Success Managers create tickets directly in HubSpot, replacing ad-hoc tracking methods.
- **Phase 2 (Automated)**: Customers submit issues via email/forms to auto-generate tickets, with dashboards for real-time monitoring.

### Onboarding Pipeline
Migrating Gantt chart-based onboarding to HubSpot for collaborative tracking.
- **Centralized step management**: Replacing Excel with dynamic fields to log stages, owners, and deadlines across all customer onboardings.
- **Audit trail requirements**: Tracking step changes and timelines to identify process delays or resource constraints.

### Multifactor Authentication (MFA) Centralization
Addressing credential management risks for utility portal access.
- **Decoupling from personal devices**: Removing MFA dependencies on employee phones to prevent operational disruptions during turnover.
- **Central credential repository**: Researching software solutions to securely store and rotate portal credentials without manual intervention.

---

## Technical Access and Dependencies
Resolving critical knowledge gaps blocking system maintenance.

### Virtual Machine Code Access
Securing ownership of proprietary code supporting essential services.
- **Deployment visibility gap**: The service’s deployment environment and dependencies are undocumented, risking operational continuity during absences.
- **Access negotiation**: Coordinating with the original developer to share code repositories and setup documentation for team-wide access.
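The blank-line ingestion failures above are cheap to catch before submission. A minimal pre-flight sketch (the function name and the idea of counting removals for follow-up are assumptions; the real batch workflow may differ):

```python
import csv
from pathlib import Path

def clean_batch(src: Path, dst: Path) -> int:
    """Drop fully blank rows from a batch CSV before UBM ingestion.

    Returns the number of rows removed so the batch can be flagged for
    root-cause follow-up rather than silently patched by hand.
    """
    removed = 0
    with src.open(newline="") as fin, dst.open("w", newline="") as fout:
        writer = csv.writer(fout)
        for row in csv.reader(fin):
            if any(cell.strip() for cell in row):
                writer.writerow(row)
            else:
                removed += 1
    return removed
```

Using `csv` rather than raw line edits avoids the date-format corruption that manual spreadsheet edits keep introducing.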
DSS Check-in
## Power Factor Ticket Progress
Recent updates on the Power Factor ticket reveal partial progress, with changes deployed to the development environment. Initial tests showed successful invoice processing, but a secondary review is pending from Aftran to validate accuracy. Early observations suggest some bills (e.g., the first bill ID mentioned) were manually processed, highlighting potential gaps in automation. Additional examples were recently added to the ticket for further investigation.

## Reprocessing Strategy for Invoices
A systematic approach to reprocessing error-affected invoices is under discussion. Key considerations include:
- **Error identification**: Current systems lack automated error feedback from UBM, making it difficult to flag bills needing reprocessing.
- **Reprocessing mechanics**: Reprocessing via DSS may generate new bills rather than overwriting existing data, requiring clarity on UBM’s data ingestion process (e.g., whether files are pulled from storage accounts).
- **Criteria for selection**: Proposals include filtering bills by vendor, account, or error type, but no existing framework tracks these parameters (a selection sketch follows this summary).

## Clarification on Observation Types
The term “observation types” was briefly raised but remains unresolved. Initial speculation linked it to “usage charges” or similar billing categories, but further input from stakeholders like Go Op Consumer is needed to define its scope and relevance to ongoing workflows.

## Next Steps and Task Prioritization
Focus areas for upcoming work include:
- **Ticket prioritization**: Leveraging the *build validation filter* to streamline DevOps tasks and reduce backlog clutter.
- **Collaborative review**: Aligning with ops teams to validate fixes before reprocessing batches of invoices.
- **Process documentation**: Building a framework to systematically identify, track, and reprocess error-prone invoices as part of future error resolution.

## Technical and Workflow Challenges
Critical gaps were identified in:
- **Error feedback loops**: Reliance on manual checks due to insufficient automated alerts from UBM.
- **Data ingestion clarity**: Unclear whether reprocessed bills create new entries or update existing ones in UBM’s systems.
- **Tooling limitations**: No existing method to generate lists of bills requiring reprocessing based on specific criteria (e.g., error type, date range).
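The missing selection framework is essentially a filter over bill records. A minimal sketch, assuming a hypothetical `BillRecord` shape (the real data would come from whatever error feedback UBM eventually exposes):

```python
from dataclasses import dataclass

@dataclass
class BillRecord:
    bill_id: str
    vendor: str
    account: str
    error_code: str | None  # populated once UBM feedback exists

def select_for_reprocessing(bills: list[BillRecord], *,
                            vendor: str | None = None,
                            account: str | None = None,
                            error_code: str | None = None) -> list[str]:
    """Return bill IDs matching whichever criteria are supplied."""
    return [
        b.bill_id for b in bills
        if (vendor is None or b.vendor == vendor)
        and (account is None or b.account == account)
        and (error_code is None or b.error_code == error_code)
    ]
```

Even this much structure would let the team answer "which bills need rerunning for vendor X with error 2020" without manual spreadsheet passes.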
Catch up on UBM Project
## Meeting Purpose and Context
The meeting focused on establishing a structured approach to manage overlapping workstreams and dependencies between teams, particularly concerning the Utility Bill Management (UBM) platform and Data Services (DSS). The primary goal was to create a unified timeline for deliverables, prioritize tasks, and define a process for documenting requirements to avoid inefficiencies. Emphasis was placed on automating manual processes, aligning with leadership expectations for year-end completion targets, and ensuring cross-team accountability.

## Challenges in Current Workflows
Key pain points were identified around unclear requirements and fragmented communication, causing delays and resource bottlenecks.
- **Unstructured requirements**: Teams often submit vague problem statements (e.g., "getting reports for dev fixes") without specifying columns, calculations, or examples, leading to back-and-forth delays.
- **Lack of documentation**: Manual processes lack standardized guidelines, making it difficult to assign tasks or track dependencies. For instance, corrections in the platform lack clear instructions on *where* or *how* to implement changes.
- **Shifting priorities**: Daily changes to problem statements disrupt planning, especially with new discoveries during development.

## Process Optimization Strategies
A standardized intake process was proposed to streamline task management and clarify expectations.
- **Requirement templates**: Implementing a templatized system for requests, forcing specificity (e.g., defining exact data columns, use cases, and screenshots) to eliminate ambiguity.
- **Centralized tracking**: Using JIRA as a single source of truth for timelines, with color-coded swim lanes (e.g., blue for dev-dependent tasks) to visualize interdependencies.
- **Proactive prioritization**: Focusing first on October’s urgent deliverables, particularly those blocking other teams (e.g., Tara’s dependencies), to demonstrate quick wins.

## Coordination and Accountability Framework
Cross-team alignment will be enforced through structured meetings and clear ownership.
- **Dedicated collaboration sessions**: Scheduling daily 1-hour meetings with key stakeholders (e.g., Tara’s team) to flesh out requirements, with recordings for asynchronous follow-up.
- **Role clarity**: Product owners (e.g., UBM’s BA) will consolidate requirements, while dev teams handle implementation without overlapping DSS/UBM work.
- **Reporting protocol**: Monthly progress updates for leadership, highlighting completed tasks and adjusting timelines based on emerging challenges.

## Timeline Development and Metrics
A dynamic, living timeline will track progress toward year-end goals, adapting to unforeseen obstacles.
- **Holistic view**: Integrating inputs from all teams into a master tracker, tagging dependencies (e.g., "CSM" or "UBM") to avoid duplication.
- **Success metrics**: Measuring reductions in manual interventions, faster report generation, and adherence to revised deadlines (e.g., shifting a task from October 31 to November 4).
- **Automation focus**: Targeting recurring issues (e.g., pass-through amount discrepancies) for scalable solutions rather than one-off fixes.

## Next Steps for Implementation
Immediate actions include socializing the new process and refining high-priority items.
- **Template deployment**: Sharing Slack-based requirement templates with teams to standardize submissions.
- **Kickoff workshops**: Running sessions with Tara’s group to detail October deliverables, using problem statements to define scope.
- **Leadership alignment**: Preparing a consolidated timeline for Abhinav by end-of-month, emphasizing resource constraints and progress milestones.
Error Triage Plan
## Summary

### Current System Challenges and Progress
The team is addressing a recurring backlog issue in invoice processing, noting that each month presents a distinct major problem to solve. Last month's focus was integrating PDFs into the pipeline, which has been resolved using GTX/BTE solutions and LLM implementation. The current bottleneck is downstream processing, specifically pre-bill pay validation. While some solutions are straightforward, others require multi-step optimization. Despite challenges, the situation has improved from the previous month, when root causes were unidentified. Key developments include:
- **Downstream focus**: Efforts are now concentrated on pre-bill pay validation after successfully tackling PDF ingestion.
- **Error pattern evolution**: New issues like error codes 20/24 emerge monthly, but established playbooks exist for resolution.
- **Progress visibility**: Power BI and UBM integrations are being implemented to enable daily reporting and track fixes.

### Error Management Framework
A structured approach to error triage is being developed to replace ad-hoc troubleshooting. The strategy involves categorizing errors by source (e.g., DSS, UBM) and complexity, recognizing that single error codes often mask multiple underlying issues. Critical components include:
- **Centralized error registry**: Creating a master list of all error codes with root-cause analysis, e.g., error 2020 stems from 8+ potential missing fields (a registry sketch follows this summary).
- **Ownership mapping**: Assigning specific errors to teams via tagged rows in a shared tracking system (Excel/Confluence).
- **JIRA integration**: Linking error tickets directly to the registry for real-time status updates and dependency mapping.

### Process Optimization Initiatives
To scale operations, the team is shifting from one-off communications to systematic prioritization. This includes instituting weekly 15-minute JIRA board reviews for task sign-offs and discouraging informal side-channel requests. Key improvements:
- **Prioritization cadence**: Implementing weekly checkpoint meetings to validate task stages and alignment.
- **Documentation drive**: Scheduling dedicated working sessions to capture historical resolution workflows for recurring errors.
- **Asynchronous collaboration**: Using shared tracking files to enable cross-team updates without meetings.

### Debugging and System Knowledge Gaps
Understanding expected system behavior remains a hurdle, particularly for legacy processes previously handled manually. This knowledge gap impedes automated solutions for errors like 3018 (integrity checks) and 2020 (missing fields). Challenges include:
- **Undefined benchmarks**: Teams lack documentation on historical manual resolutions (e.g., how operations previously auto-resolved specific errors).
- **Cross-functional alignment**: Developers need clarity on valid data states (e.g., whether null/zero values are acceptable for specific fields).
- **Context transfer**: Scheduling focused sessions with subject-matter experts to document expected behaviors per error code.

### Strategic Next Steps
The team outlined immediate actions to unblock critical path items while building sustainable processes. Focus areas:
- **Shared error registry**: Finalizing a Confluence/Excel tracker this week with error codes, ownership, and resolution paths.
- **Targeted working sessions**: Holding weekly 60-minute deep dives (starting next week) to document resolution logic for top-priority errors.
- **Backlog hygiene**: Establishing clear criteria to distinguish between quick fixes (e.g., missing meter serials) and complex multi-ticket issues.
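The registry described above is small enough to model directly, which helps agree on its columns before the Confluence/Excel version is built. The shape below is an assumption; field names and the sample entry are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class ErrorEntry:
    code: str
    description: str
    owner_team: str
    root_causes: list[str] = field(default_factory=list)
    jira_keys: list[str] = field(default_factory=list)  # linked tickets
    resolution_path: str = ""  # documented playbook, once captured

REGISTRY: dict[str, ErrorEntry] = {}

def register(entry: ErrorEntry) -> None:
    REGISTRY[entry.code] = entry

register(ErrorEntry(
    code="2020",
    description="Missing data fields (meter serial, service address, ...)",
    owner_team="DSS",
    root_causes=["extraction miss", "commodity never carries a serial"],
))
```

Modeling one code to many root causes is the important design point, since the meeting stressed that a single error code often masks several distinct issues.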
Faisal <> Sunny
## Household Management Strategies
The discussion centered on contrasting approaches to household routines, drawing inspiration from literary references while emphasizing practical adaptations for modern life.

### Structured Weekly Routines in Literature
*Little House in the Big Woods* presents a highly regimented weekly schedule:
- **Fixed daily tasks**: Monday (washing), Tuesday (ironing), Wednesday (mending), Thursday (churning), Friday (cleaning), Saturday (baking), and Sunday (resting).
- **Purpose of rigidity**: This historical approach aimed to create predictability and reduce daily decision-making burdens through strict task allocation.

### Modern Adaptation for Efficiency
The speaker explicitly rejects replicating this rigid structure but adopts its core principle:
- **Core philosophy**: Actively eliminating decision fatigue by establishing recurring patterns, especially during high-demand periods like mid-summer.
- **Practical implementation**: While not matching the book’s daily specificity, maintaining consistent *categories* of tasks weekly (e.g., cleaning, meal prep) without fixed days.
- **Seasonal flexibility**: Routines intensify during predictable high-stress periods (e.g., summer) when mental bandwidth diminishes, acknowledging the need for adaptable systems over inflexible schedules.

### Key Benefit: Reducing Cognitive Load
The primary value lies in minimizing daily choices about household management:
- **Strategic repetition**: Deliberately repeating essential tasks weekly creates automaticity, freeing mental resources for more complex decisions.
- **Customization emphasis**: Success hinges on personalizing the system ("whatever way that works for you"), rejecting one-size-fits-all solutions in favor of individual sustainability.
OpenAI Monitoring
## OpenAI Access and Monitoring Concerns
There are significant access limitations to OpenAI resources within the Constellation tenant. Currently, only Robin (not a core team member) has full access, creating a single point of failure for monitoring and issue resolution. The team experienced delays (2-3 weeks) in OpenAI responses recently, highlighting the risk. While notifications for budget overruns or usage thresholds are believed to be configured, proactive monitoring of response times and system health is hampered.

Granting access to core team members (Matthew, Shake, Merch) requires a formal process:
- Submitting tickets through Constellation.
- Completing Z-key training and special admin access requests.
- Involving Chris, who understands the full procedure (estimated 1-2 week timeline).
- Microsoft business contacts may be leveraged to expedite ticket resolution due to the significant commercial relationship.

## Pipeline Deployment Status and Challenges
The deployment pipeline for the FDG Connect app (DDA) is functional but **not yet production-ready**. Unexpected errors emerged during development, preventing full automation. Consequently:
- **Immediate deployments will require manual processes.** Documentation exists to guide these manual steps.
- Matthew possesses the necessary knowledge to handle deployments upon his return.
- The automation effort aimed to simplify future deployments but encountered unforeseen technical hurdles that remain unresolved.

## Billing and Administrative Focus
The team is prioritizing the resolution of outstanding billing issues related to specific invoices. This administrative task is expected to consume significant effort over the next few weeks. The focus is on clearing these pending items, and no immediate action is required from other team members at this stage. This serves as an awareness update on current priorities.
DSS Check-in
## Natural Gas Meter Issue Resolution
The fix for the natural gas meter issue has been successfully deployed to production, preventing future invoices from being affected. Testing confirmed no immediate issues with recent invoices. However, unresolved challenges remain regarding reprocessing of historical invoices impacted prior to the fix.
- **Reprocessing strategy for historical invoices**: Manual identification of affected bills is required since automated bulk updates aren't supported. Vendors or clients with natural gas meters should be targeted for reprocessing.
- **Reprocessing mechanism**: Bills can be manually moved to the data acquisition workflow for automatic reprocessing via the file watcher system (a requeue sketch follows this summary).
- **Timeline clarification**: The problematic bill identified was processed on August 30th, confirming it occurred before the fix deployment.

## Bill Block Location Mapping Challenges
An integrity check error (3018) occurs when bills aren't mapped to locations, particularly when service addresses are missing. This blocks bill processing and requires systematic resolution.
- **Root cause analysis**: Bills without service addresses in the "general information" section fail UBM requirements.
- **Proposed solution**: Implement an integrity check in DSS to flag bills with blank service addresses, routing them to data audit for manual intervention instead of "ready to send".
- **Knowledge gap**: Further clarification is needed regarding UBM's location mapping logic and mandatory fields.

## FDG Connect Output Filter Fix
The FDG Connect issue related to output filters in counters has been resolved and deployed. Validation confirmed functionality restoration with no outstanding issues reported.

## Power Factor Unit of Measurement Adjustment
A configuration change ensures power factor readings won't include unit of measurement values.
- **Implementation rationale**: Power factor is a dimensionless quantity, making unit labels unnecessary.
- **Technical approach**: The system will now leave the unit field blank for power factor entries while retaining proper labeling.

## Ticket Status Review and Cleanup
Multiple previously resolved tickets were confirmed as operational and moved to "done" status:
- **Excluded client/vendor configurations**: Functionality tested successfully with no issues.
- **PPG history build**: Deployment completed without problems.
- **Download report feature**: Operational as expected.
- **Missing invoice number handling**: Fix deployed and validated.

## Work Prioritization and Next Steps
Immediate focus will be on identifying bills requiring reprocessing due to the natural gas meter issue and addressing the service address mapping challenge.
- **Issue prioritization**: The power factor unit adjustment and location mapping errors are top priorities.
- **UBM error analysis**: Further investigation into frequent manual interventions will identify systemic improvements.
- **Collaboration plan**: Alignment with UBM specialists is needed to clarify location mapping requirements and mandatory fields.
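The file-watcher reprocessing path described above is mechanically simple: drop the bill back into the watched acquisition folder. The sketch below is a hypothetical illustration; the directory path is invented, and the real watcher location lives in the DSS environment.

```python
import shutil
from pathlib import Path

# Hypothetical watched folder; substitute the real DSS acquisition path.
ACQUISITION_INBOX = Path("/data/dss/acquisition/inbox")

def requeue_for_reprocessing(bill_file: Path) -> Path:
    """Move a historical bill into the watched folder so the file watcher
    picks it up and reruns it through the (now fixed) pipeline."""
    target = ACQUISITION_INBOX / bill_file.name
    shutil.move(str(bill_file), target)
    return target
```

Because reprocessing may create new bill entries rather than updating old ones (an open question in the other DSS check-in), each requeued file should be logged against its original bill ID for later reconciliation.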
Mantis <> Navigator API Integration
### Prospect
The prospect serves as the product owner and product manager for the utility bill management platform within their organization. They focus on the product and technology aspects, including data acquisition from bills and integrating the acquired data into cloud applications like Navigator. Based in Baltimore, they recently took over responsibilities from a predecessor (Chris Busby) and are now leading efforts to modernize data-handling processes. Their role involves close collaboration with internal teams (e.g., Rachel Beck’s team) and external partners to streamline workflows.

### Company
The company operates within the energy sector, specifically under the Constellation Energy brand and its parent entity (Exelon). It leverages the Navigator platform for utility bill management and data-driven cloud applications. The organization handles data acquisition from utility bills and integrates it into proprietary systems, supporting broader initiatives across Constellation Energy and affiliated entities. Their operations emphasize scalability, data accuracy, and reducing manual intervention in data exchange processes.

### Priorities
- **Automating Manual Processes**: Replacing error-prone email-based CSV exchanges with an API or UI-driven solution to eliminate keystroke errors, file version conflicts, and communication delays.
- **Long-Term Technical Integration**: Prioritizing a direct API connection over interim solutions (e.g., SFTP) to ensure scalability, avoid redundant steps, and enable future system interoperability.
- **Data Flow Clarity**: Defining clear "swim lanes" for data movement between teams, including standardized methods for account additions, drops, and changes.
- **Proactive Problem Prevention**: Addressing workflow inefficiencies now to prevent escalation, particularly as data volumes grow and processes become more entrenched.
- **Documentation-Driven Development**: Using detailed field mappings and transaction requirements (e.g., vendor-level filters, utility-specific idiosyncrasies) to guide API/UI design, ensuring alignment with existing CSV structures.
Export File Issues
## Output File Format Issues
Persistent problems exist with output files causing rejection during UBM import due to blank lines. The core issue involves DSS incorrectly generating bill blocks with zero values for non-existent commodities, creating invalid CSV formats that fail validation.
- **Blank line generation**: When no supply charges exist (e.g., water/sewer bills), DSS inserts a line with blank meter data and zero-filled charges, violating UBM's ingestion requirements. This manifests as "commodity observation missing" errors (a suppression sketch follows this summary).
- **Previous fix status**: A patch was deployed weeks ago to suppress these blank lines, but recent failures suggest it may have regressed or been ineffective. Testing had initially confirmed resolution, yet daily broken batches persist.
- **Impact**: Manual intervention is required to edit CSVs (e.g., deleting blank lines), but date conversion errors during edits compound the problem. Approximately 5+ batches fail daily, delaying processing.

## Daily Batch Processing Challenges
Broken batches requiring manual fixes are reported almost daily, straining resources and delaying bill pay operations.
- **Error patterns**: Most recent failures stem from the blank-line issue, indicating the DSS fix may not be fully functional in production. Examples include batches from Citra Country Meadows and Monsenh (Prepay).
- **Workaround limitations**: Manual CSV edits risk introducing new errors (e.g., date format corruption), making this unsustainable. The team prioritizes urgent batches but lacks bandwidth for comprehensive analysis.

## Natural Gas Classification Errors
DSS mislabels natural gas costs as "generation" instead of correct categories like "wellhead use" or "wellhead use cost."
- **Scope**: This affects accurate billing representation, as "generation" is reserved for electricity/solar contexts. Correct categorization is essential for valid UBM ingestion.
- **Resolution plan**: The issue is confirmed as a high-priority bug, with a fix slated for development after validation.

## Meter Configuration Blockers
"Do not use" and "other" meter types halt processing in the "ready to send" stage, requiring manual rollbacks.
- **Root causes**:
  - *EnergyCap meters*: Prefixed as "do not use" but use incompatible observation types (e.g., `customer charge_pr` vs. Paired formats), causing validation failures.
  - *"Other" meters*: A legacy setup for non-commodity charges (e.g., one specific Constellation client) conflicts with current workflows.
- **Impact**: These errors force manual retrieval of bills from "ready to send," creating bottlenecks. Screenshots of electric_do_not_use validation failures were shared for troubleshooting.

## System Integration Priorities
Addressing critical path issues is essential to unblock automated processing.
- **Immediate focus**:
  1. **Natural gas mislabeling**: Fixing this will resolve a major audit blocker.
  2. **"Do not use" meters**: Preventing these from populating outputs is next, to reduce manual rework.
- **Validation protocol**: Teams will tag specific bills (with IDs/screenshots) for rapid testing post-fix. Cross-system bill-ID linking (DSS↔UBM) is now possible to accelerate debugging.
- **Constraint**: Non-urgent FDG Connect requests are deprioritized until bill-pay blockers are resolved.
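The blank-line bug is generation-side, so the durable fix is a guard before the row is ever written. A minimal sketch of that intent, assuming a hypothetical block dict with `meter_serial` and `charges` keys (the real DSS structures are not described here):

```python
def should_emit_block(block: dict) -> bool:
    """Suppress bill blocks that exist only as zero-filled placeholders.

    Mirrors the intended DSS patch: a block with no meter data and
    all-zero charges (e.g., a phantom supply section on a water/sewer
    bill) must not reach the output CSV at all.
    """
    has_meter = bool(str(block.get("meter_serial", "")).strip())
    charges = block.get("charges", [])
    all_zero = all(float(c or 0) == 0 for c in charges)
    return has_meter or not all_zero

assert should_emit_block({"meter_serial": "", "charges": [0, 0.0]}) is False
assert should_emit_block({"meter_serial": "12345", "charges": [42.10]}) is True
```

A unit test along these lines, run against the daily failing batches' inputs, would also settle whether the earlier patch regressed or never covered these cases.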
UBM Planning
## Technical Issues and Permissions A significant portion of the meeting focused on unresolved technical issues, primarily concerning server access and permissions. Key problems included the inability to view applications or logs on the "Web1" machine, suspected to stem from permission restrictions affecting specific user accounts. Efforts to diagnose this were hindered by the lack of log visibility. Separately, there was confusion about whether the "Apple one" server was correctly writing data to the intended location, raising concerns about traceability and server management efficiency. - **Web1 server access failure:** Users couldn't see applications or logs on Web1, strongly indicating a permissions misconfiguration that needs urgent resolution to ensure operational continuity. - **Apple server data routing uncertainty:** Clarification was needed on whether the "Apple one" server was directing data to the correct destination (Web1), highlighting potential misconfiguration risks. - **Diagnostic limitations:** The inability to access logs on Web1 severely hampered troubleshooting efforts, emphasizing the critical need for proper permission sets. ## Team Availability and Resource Planning The availability of a key team member, Matthew, was uncertain. While he was expected back, confirmation of his return and availability to support the team was pending. This uncertainty directly impacted planning for coverage, especially regarding ongoing infrastructure work and user support during absences. - **Matthew's status:** His return was anticipated but unconfirmed, creating a gap in planned support and resource allocation for critical tasks. - **Contingency planning:** Ensuring other team members (like Faisal) had necessary administrative access (e.g., Microsoft 365 for user management, MFA, licenses) was prioritized to maintain operations in case Matthew remained unavailable. ## Infrastructure and Pipeline Development Progress on infrastructure development, specifically creating a VM and running local components for a pipeline, faced significant challenges. The complexity proved greater than anticipated, delaying the goal of enabling other team members to build infrastructure artifacts independently. Success here is crucial for team resilience and operational continuity. - **Pipeline construction difficulties:** Creating a functional VM and running local components encountered unexpected hurdles, slowing progress towards a deployable pipeline solution. - **Independence goal:** The primary objective was to empower the team to build infrastructure components without reliance on specific individuals, a capability currently delayed by the technical obstacles. ## UBM Data Issues and Prioritization Addressing data validation errors within the UBM (Utility Bill Management) system emerged as the top operational priority. Several specific data integrity issues were identified, requiring investigation and fixes: - **Commodity Type Error (e.g., Natural Gas):** A critical issue where "generation" line items were incorrectly associated with natural gas commodity types. The exact nature of the fix (simple exclusion rule or deeper logic change) required clarification from the business (Afton). - **Data Integrity Checks Failing Validation:** Multiple underlying issues were causing UBM build validations to fail, including: - Missing or incorrect service address information. - Account ID, Meter ID, and Commodity type discrepancies needing synchronization between systems (DSS and UBM). 
  - Potential issues with Meter Serial numbers.
- **Arithmetic Check (Total Line Items):** An immediate task involved implementing a check to verify that the sum of line items matches the bill total, routing discrepancies to an audit queue instead of processing (a sketch appears at the end of this summary).

**Prioritization** focused on:

1. Implementing the arithmetic check for bill totals as requested.
2. Investigating and resolving the commodity type error for natural gas based on business feedback.
3. Tackling the broader set of data integrity issues (account IDs, addresses, etc.), starting with examples from error logs (e.g., 3018 errors).

## Access and Tooling

Gaining appropriate access to the necessary tools was a recurring theme. Specific actions included:

- **EPM/UBM Platform Access:** Arrangements were made to grant access to the UBM admin platform (using the Constellation email) to facilitate direct examination of bills and validation errors. Ruben was identified as the admin who can provide this access.
- **Error Log Review:** Access to a detailed spreadsheet listing UBM build validation errors (e.g., 3018 errors) was provided to aid in diagnosing and prioritizing data integrity fixes. This document was recognized as vital context for understanding the scope of the issues.
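To make the requested arithmetic check concrete, here is a minimal sketch; the bill structure and the one-cent tolerance are assumptions, not details confirmed in the meeting:

```python
from decimal import Decimal

TOLERANCE = Decimal("0.01")  # assumed tolerance; the meeting did not specify one

def route_bill(bill: dict, audit_queue: list, processing_queue: list) -> None:
    """Verify that line items sum to the bill total; discrepancies go to audit."""
    total = Decimal(str(bill["total"]))
    line_sum = sum(Decimal(str(item["amount"])) for item in bill["line_items"])
    if abs(line_sum - total) > TOLERANCE:
        audit_queue.append({"bill": bill, "expected": total, "line_item_sum": line_sum})
    else:
        processing_queue.append(bill)
```

Using `Decimal` rather than floats avoids binary rounding artifacts that could falsely flag bills whose line items actually sum correctly.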
DSS/UBM sync
## Data Consolidation and Reporting Needs

The meeting opened with a critical need to consolidate multiple task-tracking reports into a unified system to monitor task prioritization, actioning timelines, and overdue activities.

- **Report consolidation objective**: Create a single dashboard to track metrics like time-to-action for tasks (e.g., invoice pulling), including instances of task snoozing or delays, to prevent operational backlogs.
- **Urgency for timeliness**: Unaddressed tasks risk recurring invoice backlogs and payment delays, directly impacting financial operations and client satisfaction.

---

## DSS Deployment and Testing Deficiencies

Significant gaps in the Data Services System (DSS) deployment were highlighted, centering on siloed development and inadequate testing protocols.

- **Testing shortcomings**: DSS was tested in isolation without end-to-end validation under real-world volumes, missing critical failures in the invoice-to-bill-pay workflow.
- **Root causes**: No structured test scripts or comprehensive test cases were used; minimal ad-hoc testing failed to expose scalability issues or integration flaws with downstream systems like UBM (Utility Bill Management).

---

## Data Accuracy and Auto-Fixing Challenges

Auto-fixing invoice data during processing introduced inaccuracies requiring retroactive correction, with operational and analytical repercussions.

- **Unit of measure errors**: Examples include "lamps" incorrectly assigned instead of "kWh", distorting cost calculations and requiring manual overrides.
- **Retroactive correction strategy**: Engineering and operations teams must collaborate to identify auto-fixed invoices and develop systematic fixes, though operational knowledge gaps complicate resolutions.
- **Ripple effects**: Uncorrected data skews analytics and may trigger client disputes, demanding urgent cross-functional alignment.

---

## Prioritizing Validation Errors

The team prioritized resolving "integrity check" validation failures, which block invoice mapping and cause downstream payment delays.

- **Critical validations**: Failures stem from mismatches in four key fields (service account ID, meter ID, commodity type, and billing ID), preventing system mapping.
- **Impact analysis**: These errors account for roughly 2,500-3,000 stalled invoices, directly affecting GL coding and timely payments, and are exacerbated by inconsistencies in commodity labeling (e.g., water vs. sewer).
- **Tactical approach**: Download and analyze error reports to quantify volumes per issue type (e.g., meter serial number gaps) and deploy targeted fixes.

---

## Resource and Operational Constraints

Resource shortages and operational bottlenecks emerged as critical barriers to resolving data issues and reducing backlogs.

- **Operational overload**: Teams face 1,000+ pending items in bill-pay queues, with disconnection risks and priority invoices compounding delays.
- **Resource gaps**: A dedicated data analyst or database specialist is needed to research error patterns, but current staff are split between firefighting and onboarding tasks.
- **Proposed solution**: Onboard a backend database expert to assist with bulk data repairs and script development, reducing reliance on manual interventions.

---

## Systemic Changes for Long-Term Stability

Long-term fixes may require redefining UBM's core matching logic and automating data repairs to prevent recurring issues.
- **Rethinking UBM's matching logic**: Overreliance on meter/account-level matching is error-prone; simplifying to matching on vendor and account number could improve reliability (a sketch follows these notes).
- **Automation urgency**: Templatized bulk-upload workflows for data repairs are essential to avoid taxing senior database resources weekly.
- **DSS and UBM alignment**: Address immediate DSS fixes (e.g., commodity-type errors) while planning UBM enhancements to phase out stopgap measures.
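A minimal sketch of the simplified matching idea, indexing existing records by vendor plus a normalized account number; the field names are illustrative assumptions:

```python
import re

def normalize_account(raw: str) -> str:
    """Drop spaces, dashes, and other punctuation that cause false mismatches."""
    return re.sub(r"[^A-Za-z0-9]", "", raw).upper()

def build_index(existing_records: list) -> dict:
    """Index existing UBM records on (vendor, normalized account number)."""
    return {
        (rec["vendor"], normalize_account(rec["account_number"])): rec
        for rec in existing_records
    }

def match_invoice(invoice: dict, index: dict):
    """Match on vendor + account only, ignoring meter-level detail entirely."""
    key = (invoice["vendor"], normalize_account(invoice["account_number"]))
    return index.get(key)
```

Meter- and commodity-level detail would then be reconciled after the account-level match rather than gating it, which is the reliability gain the simplification is after.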
Daily Priorities
## Summary

### MedXL AP Test File Status

The MedXL AP test file was discussed, with confirmation that a version exists but uncertainty about whether it was ever sent to MedXL.

- **AP test file status**: A version of the test file is available, though it's unclear whether it was previously sent to MedXL. It will be forwarded regardless, to facilitate testing.
- **Test scope limitations**: Only a subset of bills was used for testing due to the volume (thousands of bills) and complexities like utility checks, which make comprehensive testing impractical.

### Tableau Reporting Timeline

The Tableau test file, which requires removal of a specific column, is targeted for readiness by October 20th to allow two weeks of testing.

- **Development progress**: The timeline is deemed feasible based on current developer capacity, pending formal sign-off on requirements.
- **Requirement validation**: A Confluence document outlining field mappings will be shared with stakeholders for confirmation before the report is finalized.

### Customer-Specific Updates

Several clients had active items reviewed, including amendments, billing structures, and onboarding priorities.

- **Saint Elizabeth history completion**: Dependent on an internal review by Anita, with an expected wrap-up by the end of the week.
- **Banyan billing request**: Their requirement to break down summary bills per unit was deemed unfeasible without creating mock bills for each unit, a process noted as time-prohibitive under current constraints.
- **Amendment prioritization**: Prepaid bill pay takes precedence, delaying review of other customer amendments until mid-week.

### AP Implementation Pipeline

The schedule for multiple clients' AP integrations was outlined, with varying stages of completion.

- **Immediate priorities**: Michaels (targeting October 20-21), Rental, and Rainier (October 24) are in active development or testing phases.
- **Upcoming integrations**: Westover is on track for November 7, while MedXL, Covia, and BP remain in discussion phases without confirmed timelines.
- **Resource constraints**: Michael's enrollment bill mapping was delayed by a team member's unavailability last week, but follow-ups are underway.

### New Client Onboarding (Simon)

Simon's AP requirements need alignment with their existing systems, prompting a technical review.

- **File standardization**: Their requested AP format appears similar to Victor's NG standard file, enabling potential reuse if validated.
- **Coordination plan**: A recurring technical call on Thursdays at 11 AM ET will be used to confirm file specifications and address discrepancies.
- **Lafayette College update**: Their AP setup (non-API) is progressing, with a meeting confirmed for Friday morning.

### JVM Contract Termination

Processes to halt JVM's AP and payment files were clarified following the end of their contract.

- **Final processing cutoff**: Bills pulled by Friday, September 26th, represent the last batch for processing; no new bills will be ingested.
- **System deactivation steps**: Entitlements must be revoked to stop AP automation, and the customer must be marked inactive to prevent future bill flow.
- **Operational handoff**: Agents or operators will execute the entitlement shutdown once existing queues are cleared.
DSS Check-in
### Summary

The meeting focused on project updates, priorities for the week, and critical technical issues requiring immediate attention. Key topics included progress on the manual client exclusion feature for FTG, the onboarding of new engineers, and a shift in focus toward resolving high-priority bugs in the DSS-to-UBM pipeline. Enhancements unrelated to critical failures will be deprioritized. A significant issue involves bills without invoice numbers causing payment processing errors, where the LLM incorrectly captures account numbers instead of billing IDs. Another, lower-priority issue involves missing charge and tax data in bill summaries.

### Wins

- Manual client exclusion for FTG is nearing completion and expected to be finalized today.
- A workaround was identified for missing invoice numbers: using the billing IDs ("Bit ID") found in bill headers.

### Issues

1. **Incorrect invoice number capture**: The LLM extracts account numbers instead of invoice numbers for bills lacking explicit invoice fields, causing payment rejections by utilities.
2. **Location mapping discrepancies**: Address variations (e.g., "Road" vs. "Rd") prevent bill-to-account linking in UBM, requiring fuzzy-logic standardization (see the sketch after this summary).
3. **Missing bill data**: Some bills lack other charges, credits, and taxes in their summaries, though this is a lower priority.
4. **Account number formatting**: Spaces, dashes, and special characters in account numbers need stripping to ensure consistency.
5. **Data source conflicts**: Account number errors originate from DB codes, not LLM extraction.

### Commitments

- **Team member**: Complete manual client exclusion for FTG by end of day.
- **Team member**: Address DSS handling of invoice-less bills next, using billing IDs ("Bit ID") as the solution.
- **Manager**: Create and prioritize tickets for critical issues (invoice number handling, location mapping) and lower-priority items (missing charges/taxes).
- **Team member**: Review the onboarding session recording for new engineers to align with their integration.
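A common approach to the "Road" vs. "Rd" problem is canonicalizing street suffixes before comparison. A minimal sketch with an illustrative abbreviation table, not the team's actual rule set:

```python
import re

# Illustrative suffix table; a production version would cover the full USPS list.
SUFFIXES = {"road": "rd", "street": "st", "avenue": "ave", "drive": "dr",
            "boulevard": "blvd", "lane": "ln", "court": "ct"}

def canonical_address(raw: str) -> str:
    """Lowercase, strip punctuation, and collapse street-suffix variants
    so 'Main Road' and 'Main Rd.' compare equal."""
    tokens = re.sub(r"[^\w\s]", "", raw.lower()).split()
    return " ".join(SUFFIXES.get(t, t) for t in tokens)

assert canonical_address("123 Main Road") == canonical_address("123 Main Rd.")
```

The same stripping idea covers issue 4: removing non-alphanumeric characters from account numbers before any comparison.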
DSS/UBM sync
Bill Error Validation
UBM Error
Web Downloader Issues
## Summary

### Task Duplication Issue and Short-Term Fix

A significant problem was identified where task duplication occurs when the DS upload filter is inactive, causing redundant work. When files are uploaded, tasks are removed but subsequently repopulated, leading team members to unknowingly reprocess the same accounts multiple times. This results in duplicated effort (sometimes triple the work) and strains resources, as completed tasks are unnecessarily redone.

- **Proposed immediate solution**: Setting the DS upload filter to inactive was confirmed as a viable short-term fix to prevent this duplication. This adjustment will stop tasks from being removed and repopulated after uploads.
- **Implementation plan**: The fix is already captured in an existing ticket and will be moved to the top of the to-do list for swift execution.

### Vendor Exclusion and Report Requests

Progress on vendor exclusion work is nearly complete, with an estimated one hour remaining. Prioritization was emphasized to avoid leaving tasks partially complete.

- **Vendor exclusion status**: The task is in its final stages and will be finished before focus shifts to other priorities.
- **New report requirement**: A query is needed to generate a list of all web-enabled vendors, including vendor names, IDs, and URL links (a sketch follows this summary). This data is required by the global team and will be formalized via email and a new ticket.
- **Report consolidation initiative**: An effort is underway to catalog all active reports, their purposes, and creation dates. Focus will be placed on distinguishing legacy SSRS reports from newer Power BI migrations (e.g., "System Status New", "DSS Prod") to identify redundant or outdated assets.

### UBM Bill Processing Challenges

Critical issues are occurring during UBM bill processing, where numerous bills fail integrity checks (multiple errors per bill) and data verification stages (Data Verification 1 and 2).

- **Current approach**: The team is manually addressing these failures as a stopgap to meet month-end deadlines.
- **Long-term solution needed**: Starting next week, a deep dive will analyze the root causes of these failures, which may involve issues with data assessment, IPS, or output services. Collaboration with the UBM technical team (e.g., access to UBM schemas) is essential, as the current UBM process is a "black box" for the team.

### System Report Documentation and Cleanup

A comprehensive audit of existing reports is in progress to document their usage and relevance, particularly focusing on recent creations and migrations.

- **Legacy vs. active reports**: Paginated reports are confirmed as legacy artifacts migrated from SSRS to Power BI. Recent reports (e.g., those created in the past month) need clear documentation of their purpose.
- **Next steps**: The team lead will request a detailed list of recently created or modified reports to evaluate their ongoing utility and plan potential consolidation.
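The requested vendor report reduces to a single query. A sketch using `pyodbc`, with hypothetical table and column names (`vendors`, `web_enabled`, `portal_url`), since the actual schema wasn't discussed:

```python
import pyodbc  # assumes a SQL Server backend reachable via ODBC

# Hypothetical schema; table and column names are placeholders.
QUERY = """
    SELECT vendor_id, vendor_name, portal_url
    FROM vendors
    WHERE web_enabled = 1
    ORDER BY vendor_name;
"""

def web_enabled_vendors(conn_str: str) -> list:
    """Return (id, name, url) rows for every web-enabled vendor."""
    with pyodbc.connect(conn_str) as conn:
        return conn.cursor().execute(QUERY).fetchall()
```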
DSS Check-in
## Summary

### Immediate Bill Payment Processing Challenges

The team faces urgent pressure to resolve bill payment processing errors by the next day to prevent customer impact, particularly for end-of-month payments. Approximately 11,000 errors are backlogged, with new errors accumulating weekly, creating a "needle in a haystack" scenario for identifying critical issues. While not all errors are critical (some are flagged as DSS-related but non-critical), the volume obscures high-priority fixes. The immediate goal is to reduce error counts so the team can communicate progress transparently to customers and mitigate the reputational damage of delayed resolutions.

### DSS System Issues and Access Constraints

Upstream data ingestion via the DSS system is identified as a recurring source of errors, requiring urgent attention to prevent future bottlenecks. Key challenges include:

- **Output service code access**: Critical troubleshooting is hindered because the codebase resides solely on Matthew's local machine, with no team access. Matthew's unavailability until Monday delays resolution, forcing workarounds.
- **Error origin ambiguity**: It is disputed whether errors stem from DSS formatting, the output service, or UBM processing. Examples of "yellow-flagged" DSS errors need analysis to determine whether they require upstream fixes or UBM workflow adjustments.
- **Long-term solution dependency**: Resolving core issues may require coordinating DSS fixes with UBM changes, but UBM's complex codebase and slow iteration cycles (changes can take weeks) complicate coordination.

### Reporting and Visibility Gaps

A lack of real-time, consolidated reporting exacerbates response delays and misalignment. Critical developments include:

- **New Power BI dashboard**: A pre-production report is being developed to unify bill IDs, customers, and error statuses, addressing the absence of a single source of truth. This aims to replace fragmented Excel exports and accelerate decision-making.
- **Interim Appsmith solution**: A read-only UI connected to the UBM database allows ad-hoc querying and exports, reducing dependency on dev teams for data pulls.
- **Report consolidation initiative**: Over 40 existing Power BI reports will be cataloged in a living document, with plans to sunset redundant or one-off reports (e.g., those used less than twice a year) to streamline focus.

### Team Coordination and Process Improvements

Operational inefficiencies require structural changes to handle recurring crises:

- **Recurring check-ins**: Twice-weekly (Tuesday/Thursday) syncs will be established for core members (e.g., Faisal, Jay, Tim) to monitor error resolution and prioritize tasks without leadership overhead.
- **Expedited issue handling**: A dedicated triage team or an SLA-driven process for P0/P1 issues will be explored to prevent urgent fixes from stalling in sprint backlogs.
- **Communication channels**: Transitioning from ad-hoc Teams messages to threaded channels (e.g., "DS Ops and Product") improves issue tracking and reduces context-switching.

### UBM Team Workload and Prioritization

The UBM team's capacity constraints and process rigidity delay critical fixes:

- **Workload imbalance**: Customer onboarding and data repairs consume significant resources, leaving minimal bandwidth for error resolution. A proposed triage team could handle exports/imports separately.
- **Prioritization inertia**: The team's adherence to legacy workflows (e.g., manual Excel handling) conflicts with urgent needs.
  Restructuring work "buckets" (e.g., feature development vs. expedites) and setting clear SLAs are the proposed solutions.
- **Mindset shift**: Encouraging the team to challenge "how it's always been done" (e.g., fixing upstream issues in DSS instead of UBM) is essential for long-term efficiency.

### DSS Team Expansion and Integration

Three new India-based developers will join to bolster DSS capabilities, focusing on:

- **Backend support**: Assisting Shake with error resolution and backlog management, leveraging their UBM database experience.
- **UBM integration**: Training on DSS semantics and UBM workflows to facilitate cross-system troubleshooting and reduce silos.
- **Sunsetting DSS**: Long-term plans include phasing out DSS by migrating its functions to UBM (targeting H1 2025) to eliminate redundancy and simplify the architecture.

### Knowledge Sharing and Documentation

Bridging knowledge gaps is critical for sustained progress:

- **UBM training for the DSS team**: Shake and the new members will learn UBM's data model and error codes to diagnose issues faster, using real examples from recent reports.
- **DSS schema documentation**: Creating a reference for UBM-related DSS workflows to clarify handoff points and dependencies.
- **Error code analysis**: Collaborating with Jay to categorize errors (e.g., DSS vs. output service vs. UBM) and build a remediation roadmap for October.
Bill Error Validation
## Summary

### Billing Processing Challenges and Backlog

The meeting centered on a significant backlog of approximately 11,000 bills requiring processing, made urgent by month-end payment deadlines. Unresolved validation errors were preventing bills from progressing through the system, risking service disconnections for clients if payments were delayed.

- **Backlog composition**:
  - 5,177 bills in "integrity check" (structural validation failures).
  - 3,220 in "data verification 1" and 2,678 in "data verification 2" (data-specific issues).
- **Critical errors**:
  - Structural issues such as misaligned observation columns (e.g., codes 2007, 2008) and unknown observation types halted processing.
  - Data mismatches (e.g., billing totals not aligning with line items) required manual verification against PDFs.

### Strategic Approaches to Error Resolution

Participants debated ways to bypass non-critical validations and expedite bill processing, balancing speed against data integrity.

- **Auto-fix vs. disabling validations**:
  - *Auto-fix*: Preferred because it preserves data integrity; it automatically resolves non-critical errors (e.g., null values in non-essential fields) but requires reprocessing all 11,000 bills, risking system slowdowns.
  - *Disabling validations*: Faster for new bills but ignores the existing backlog; deemed riskier for audit trails and future analytics.
- **Prioritization**:
  - Focus on moving bills to payment status by auto-resolving all data verification errors *except* five critical codes (2024, 2012, 2030, 2004, and overlapping service dates); a sketch of this partitioning appears at the end of these notes.
  - For integrity checks, explore reclassifying errors like 2007/2008/2035 as data verification errors so they can be auto-resolved.

### Operational Execution Plan

A phased approach was agreed upon to clear the backlog while minimizing system disruption.

- **Immediate actions**:
  - Auto-resolve non-critical data verification errors (DV1/DV2) by the next morning, excluding the five high-risk codes.
  - Operators to manually resolve critical errors (e.g., missing line items, subtotal mismatches) flagged in the system.
- **Integrity check mitigation**:
  - Investigate moving 3,264 bills with specific error codes (2007, 2008, 2035) from integrity check to data verification for faster resolution.
  - Use bulk re-parsing in batches (1,000 bills at a time) to apply fixes, acknowledging potential platform latency during processing.
- **Resource allocation**:
  - Leverage overnight/off-hours processing to maximize throughput.
  - Provide operators with targeted error reports (e.g., unmapped bill blocks) for focused manual intervention.

### Client-Specific Priorities

Urgent processing for bill-pay clients (e.g., Victor) was emphasized to prevent service disruptions, with analytics functionality temporarily deprioritized.

- **Victor's backlog**:
  - ~1,000 mock bills required resolution, with 50 missing from DS/DBM sources.
  - Payment files were prioritized for e-check processing to meet deadlines, leveraging reserve funds for staggered payments.
- **Platform mindset shift**:
  - Temporarily treat UBM as a bill-pay/AP platform rather than an analytics tool, accepting "noisy" data to expedite payments.

### Forward-Looking Adjustments

Long-term fixes were discussed but deferred in favor of immediate crisis management.

- **System adjustments**:
  - Post-crisis, revisit error validation settings to prevent recurrence.
  - Enhance DS (Data Services) output quality to reduce upstream data issues.
- **Communication**:
  - Maintain real-time updates via team chat to monitor progress and address emerging bottlenecks.
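A minimal sketch of the agreed partitioning and batched re-parse; the error-record fields and the `reparse_batch` hook are assumptions, since the platform's actual API wasn't described:

```python
CRITICAL_CODES = {"2024", "2012", "2030", "2004"}  # plus overlapping service dates
BATCH_SIZE = 1_000  # batch size agreed in the meeting

def partition_errors(errors: list) -> tuple:
    """Split DV1/DV2 errors into auto-resolvable and operator-handled sets."""
    auto, manual = [], []
    for err in errors:
        critical = err["code"] in CRITICAL_CODES or err.get("overlapping_service_dates")
        (manual if critical else auto).append(err)
    return auto, manual

def reparse_in_batches(bill_ids: list, reparse_batch) -> None:
    """Re-parse bills 1,000 at a time to limit platform latency."""
    for start in range(0, len(bill_ids), BATCH_SIZE):
        reparse_batch(bill_ids[start:start + BATCH_SIZE])
```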
Virtual Account Linking
## Data Integrity Challenges

Approximately 8,000-9,000 items are currently in UBM, with half requiring integrity checks due to mismatches in critical identifiers like billing ID, meter ID, utility type, bill type, and service ID. These discrepancies, often caused by formatting inconsistencies (e.g., extra spaces or dashes in account numbers), prevent automated mapping to locations and disrupt downstream processes like AP file generation.

### Root Causes of Mismatches

- **Formatting inconsistencies**: Dashes, spaces, or missing characters in identifiers (e.g., billing accounts) trigger integrity checks.
- **Systemic gaps**: DSS (Data Services System) handles data normalization inconsistently, leading to duplicate virtual accounts and manual rework in UBM.
- **Attribute misalignment**: New virtual accounts lack essential attributes, requiring manual intervention to populate fields for AP file integration.

---

## Virtual Account Resolution Strategy

A method was developed to link newly created virtual accounts to existing ones by matching identifiers across spreadsheets split into "new" and "existing" entries. The process involves:

1. **Filtering closed/irrelevant accounts** to narrow the dataset.
2. **Matching logic**: Prioritizing links where billing accounts, service sites, and utility types align uniquely.
3. **Handling ambiguities**: For accounts with multiple potential links, manual review determines the correct mapping based on location and billing consistency.

### Execution and Progress

- **Automated matching success**: 90% of accounts were linked automatically using concatenated identifiers, significantly reducing the backlog (a sketch of this matching appears at the end of these notes).
- **Remaining challenges**: 10% require manual resolution due to non-unique matches or complex scenarios (e.g., presheet systems with overlapping accounts).

---

## Downstream Impact of DSS Issues

Backlogs in DSS propagate to UBM, creating "virtual accounts" without attributes during bill ingestion. This cascades into:

- **Manual workload**: Teams must retroactively assign attributes to ensure AP file compatibility.
- **Systemic bottlenecks**: Fixing one backlog in DSS inadvertently generates new backlogs in UBM, highlighting flawed data handoffs.

### Critical Pain Points

- **Attribute assignment**: Virtual accounts created during bill processing lack metadata, stalling financial workflows.
- **Inconsistent DSS outputs**: Variations in data formatting (e.g., spaces/dashes) force UBM into reactive cleanup cycles.

---

## Long-Term Mitigation Approaches

To prevent recurring issues, strategic fixes were proposed:

1. **Upstream normalization**: Use "raw account number" fields (without spaces/dashes) from DSS outputs to improve UBM matching accuracy.
2. **Vendor-specific exclusions**: Allow the LLM (data processing logic) to skip problematic vendors or formats prone to errors.
3. **Billing account consolidation**: Leverage UBM's existing "billing account" view to group virtual accounts by vendor and cleaned identifiers, reducing fragmentation.

### Technical Considerations

- **Training LLM models**: Prioritize activating correct sub-accounts/meters in DSS to avoid generating redundant entries.
- **Edge cases**: Summary bills spanning multiple locations complicate raw account number adoption, requiring nuanced handling.

---

## Immediate Action Plan

A framework was established to address the remaining 10% of unresolved accounts:

- **Targeted cleanup**: Review ambiguous matches (e.g., accounts with multiple linkable options) by cross-referencing locations and billing contexts.
- **Collaborative testing**: Validate the spreadsheet-based linking process on a subset of accounts to ensure scalability before full deployment.
- **Upstream monitoring**: Explore fuzzy-matching logic in UBM to preempt mismatches during DSS data ingestion.

### Key Outcomes

- **Efficiency gains**: Automating 90% of links demonstrates replicability for future batches.
- **Ongoing prioritization**: Focus on high-impact DSS fixes (e.g., meter/sub-account activation) to reduce virtual account proliferation.
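The concatenated-identifier matching amounts to indexing existing accounts by a composite key. A minimal sketch, with field names assumed from the discussion (billing account, service site, utility type):

```python
from collections import defaultdict

def composite_key(acct: dict) -> tuple:
    # Field names are assumptions; mirror whatever the spreadsheets concatenate.
    return (acct["billing_account"], acct["service_site"], acct["utility_type"])

def link_accounts(new_accounts: list, existing_accounts: list) -> tuple:
    """Auto-link unique key matches; send multi-matches to manual review."""
    index = defaultdict(list)
    for acct in existing_accounts:
        index[composite_key(acct)].append(acct)
    auto_links, manual_review = [], []
    for new in new_accounts:
        matches = index.get(composite_key(new), [])
        if len(matches) == 1:
            auto_links.append((new, matches[0]))
        elif matches:
            manual_review.append((new, matches))
    return auto_links, manual_review
```

Unique hits can be bulk-linked; keys with multiple hits land in the manual-review pile, mirroring the 90/10 split seen in practice.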
Integrity Check Excel Review
## Virtual Account Linking Process

The core discussion focused on developing a methodology to link new virtual accounts to existing ones in the billing system. This aims to resolve data mismatches and ensure accurate invoice processing by associating accounts with the correct attributes (e.g., utility type, service category).

### Data Matching Methodology

A spreadsheet-based approach is being used to identify linkable accounts by cross-referencing key fields.

- **Matching criteria**: Accounts are evaluated on identical billing account numbers, locations, vendor types, and utility categories (e.g., natural gas, electricity).
- **Automated validation**: Formulas flag potential matches by concatenating critical fields (e.g., location + account number + utility type) and counting duplicates. For example:
  - *Single-match scenarios*: If only one existing account matches all criteria, it's automatically linked.
  - *Multi-match scenarios*: Multiple matches require manual review to prioritize active accounts over closed ones.

### Technical Challenges

Several data inconsistencies complicate the linking process:

- **Utility-type conflicts**: Accounts with identical meter numbers but mismatched service types (e.g., "supply only" vs. "distribution only") cannot be linked automatically.
- **Data formatting issues**: Extra spaces or inconsistent entries in fields like meter IDs create false mismatches.
- **Attribute gaps**: New virtual accounts lack payment-critical attributes (e.g., rate schedules), making linking essential for operational functionality.

### Resolution Strategy

The team plans to:

1. **Compile exception lists**: Focus on newly created virtual accounts flagged during data verification.
2. **Prioritize active accounts**: Link new accounts to active counterparts so they inherit the necessary attributes.
3. **Manual intervention**: For complex cases (e.g., more than two matches), manually verify and select the correct account based on activity status.

### Next Steps

- **Process refinement**: Finalize the spreadsheet methodology to include handling instructions for all edge cases (e.g., closed accounts, conflicting service types).
- **Collaborative review**: Discuss unresolved challenges in the upcoming meeting to delegate tasks and accelerate resolution.
- **System updates**: Provide Ruben with validated virtual account IDs for bulk linking once the matching logic is solidified.
UBM Planning
## Summary

### Invoice Processing Challenges

The team addressed a critical backlog of approximately 7,000-9,000 invoices stuck in the "integrity check" stage of the UBM platform, risking late payments, legal action, and customer dissatisfaction.

- **Root cause**: Invoices fail integrity checks due to mismatches between DSS-captured data and existing UBM account setups. Key discrepancies include:
  - *Account number variations*: Hyphens, spaces, or extra characters (e.g., "12345" vs. "12345E") prevent exact matching.
  - *Utility type conflicts*: DSS may classify services differently (e.g., "water" vs. "fire protection") from historical UBM configurations.
  - *Meter or service ID differences*: Inconsistent formatting or data capture in critical fields like meter numbers.
- **Impact**: Without resolution, invoices cannot progress to payment-ready status, delaying AP file generation and increasing exposure to service shutoffs (especially for electricity accounts).

### Short-Term Resolution Strategy

Immediate actions focus on clearing the backlog by Friday through bulk account linking and prioritization.

- **Location mapping**: Completed for most accounts yesterday, enabling invoices to advance to the data verification stage.
- **Virtual account linking**:
  - *Approach*: Identify duplicate virtual accounts (same vendor and billing ID) and link new DSS-generated accounts to existing UBM records via backend scripts.
  - *Limitations*: Linking only works for identical utility types (e.g., water-to-water); cross-type links (e.g., water-to-sewer) require manual bill edits.
- **Prioritization**: Focus first on high-risk electricity accounts to prevent service disruptions, deferring lower-urgency utilities (e.g., water/sewer discrepancies).

### Technical Constraints and Workarounds

System limitations complicate rapid fixes, necessitating creative solutions.

- **Editing challenges**: Modifying bill attributes triggers versioning, making bulk edits impractical. Instead:
  - *Linking accounts*: Preserves existing attributes (e.g., GL codes) and avoids duplicate payments by merging records.
  - *AP file requirements*: Payment relies on bill IDs, but virtual account attributes (e.g., GL codes) are essential for downstream processing.
- **Reporting gaps**: No existing report flags "near-match" accounts. Workaround: export Account IDs reports, combine vendor and cleaned billing ID fields, and manually identify link candidates.

### Long-Term System Improvements

Proposals to prevent recurrence include normalizing data formats and enhancing matching logic (see the sketch after this summary).

- **Fuzzy logic implementation**: Future scripts could auto-flag 90%+ matches for review (e.g., ignoring hyphens/spaces in account numbers).
- **Attribute standardization**: Align UBM setups with DSS outputs (e.g., omit suffixes like "E" when they don't appear on the original bills).
- **UBM construct changes**: Allow linking across utility types to reduce manual interventions.

### Execution Plan for Backlog Clearance

The team defined steps to reduce integrity-check bottlenecks by Friday.

- **Data extraction**: Tim to generate Account IDs reports highlighting vendor/billing ID pairs for duplicate evaluation.
- **Team deployment**: All available resources (e.g., Maurice, ML, Faisal) to review the reports and flag link candidates.
- **Backend automation**: Jay to run bulk-linking scripts using the provided Excel files once mappings are confirmed.
- **Ongoing monitoring**: Continue location mapping for new invoices entering the system.
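A minimal sketch of the proposed fuzzy flagging, using the standard library's `difflib` as the similarity measure; the 0.9 threshold mirrors the "90%+ matches" idea, and the real scripts may use a different metric:

```python
import re
from difflib import SequenceMatcher

def clean_id(raw: str) -> str:
    """Strip hyphens, spaces, and other punctuation before comparing."""
    return re.sub(r"[^A-Za-z0-9]", "", raw).upper()

def near_matches(new_id: str, existing_ids: list, threshold: float = 0.9):
    """Yield existing billing IDs at least 90% similar to the new one after cleaning."""
    cleaned_new = clean_id(new_id)
    for candidate in existing_ids:
        score = SequenceMatcher(None, cleaned_new, clean_id(candidate)).ratio()
        if score >= threshold:
            yield candidate, round(score, 3)

# "12345" vs. "12345E" scores 10/11 ≈ 0.909, so the suffix variant gets flagged.
print(list(near_matches("12345", ["12345E", "99999"])))
```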
DSS Check-in
## Task Status Updates

Progress on several development tasks was reviewed. The exclusion of forum builds is nearly complete and will be pushed to the development environment for testing before moving to staging. Download-report functionality has been moved to testing, awaiting validation. Snooze activity tracking has been deployed to production but currently shows zero counts because it only captures new activities post-deployment; historical data isn't retroactively included.

## Access Management

Access to the **DSS processing email account** requires urgent attention. The account exists, but its two-factor authentication is tied to an unavailable phone number (potentially Sulab's or Chris's). Mitchel will be contacted to:

- Reassign the phone number and set up Microsoft Authenticator.
- Designate new owners and grant access to the relevant team members (including Sarah, Acton, and Mary).

Separately, access to the output service remains blocked due to unresponsiveness from Matthew. Chris will be approached to facilitate contact, as the task is estimated to take under 30 minutes once access is granted.

## Agile Board Review and Prioritization

Tickets on the agile board were evaluated for relevance and urgency:

- **High-priority items**:
  - *Retry mechanism for database failures*: Ephemeral SQL Server errors (e.g., lock contention) currently require manual reprocessing. An automated job will retry failed processes every 4 hours using the existing reprocess logic (see the sketch at the end of these notes).
  - *Source identification for ingested files*: Labels (e.g., "Email Watcher", "BD File Watcher") will be added to indicate file origins (folders, web uploads) for better traceability.
- **Deferred items**:
  - UI enhancements such as *date formatting*, *highlighting failed audit lines*, and *smart-filtering toggles* are deprioritized until the DSS UI sees active operational use.
  - *Auto-rotation of sideways images* is unnecessary, since the system already processes tilted documents correctly.
- **Documentation gap**:
  - Recreating the dev environment from scratch lacks clear steps. A guide covering authentication setup, resource configuration, and Azure dependencies (e.g., Cosmos, OpenAI) is needed.

## System Improvements and Technical Debt

Key technical adjustments were discussed:

- **Feedback loop integration**: Data on user corrections (original vs. modified values) is stored but not utilized. Generating failure reports or comparing DSS/UBM data is deferred until workflows stabilize.
- **Cost optimization**: Reserved Azure compute instances were considered but rejected; the current pay-as-you-go model is cost-effective and sufficient.
- **Data cleanup**: A toggle button already allows hiding resolved invoices from the main DSS view, addressing the deletion requests.

## Future Considerations

Third-party API use for zip-code validation (e.g., Google Maps) was noted as a low-priority enhancement. The feedback loop with UBM may be revisited later to align data changes, but this depends on broader system adoption.
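A minimal sketch of the planned retry job; `fetch_failed` and `reprocess` stand in for the existing reprocess logic, which wasn't shown:

```python
import logging
import time

RETRY_INTERVAL = 4 * 60 * 60  # the 4-hour cadence discussed

def retry_failed_processes(fetch_failed, reprocess) -> None:
    """Periodically re-run reprocess logic on items that failed with
    transient SQL Server errors (e.g., lock contention)."""
    while True:
        for item in fetch_failed():
            try:
                reprocess(item)
            except Exception:
                # Still failing; leave it for the next cycle instead of manual rework.
                logging.exception("Retry failed for %s", item)
        time.sleep(RETRY_INTERVAL)
```

In production this would more likely be a scheduled job than a long-running loop, but the retry-on-next-cycle behavior is the same.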
DSS Check-in
## Priority Management and Roadmap Visibility

Establishing clear priorities and a visible roadmap emerged as critical needs. The top priority is exporting data from UBM to resolve approximately 5,000 integrity-check issues currently causing bottlenecks, since bulk processing via file mapping is significantly more efficient than manual handling. A shared roadmap board is proposed to replace the binary "urgent vs. non-urgent" classification, enabling better prioritization of all necessary tasks while providing transparency into ongoing work and future plans.

## DSS System Enhancements and Fixes

Key updates and issues within the DSS system were addressed:

- **User Activity Tracking Fix:** A recent deployment resolved issues with tracking user actions, evidenced by snoozed items dropping from 3,700 to zero. A high-level summary table showing download versus snooze counts is now requested to complement this fix.
- **Duplicate Usage Investigation:** A critical issue involves duplicate charges appearing downstream (e.g., in QBM) that aren't present in the source system (VSS), strongly suggesting a problem in the Output Service layer. The issue appears isolated to the Royal Farms customer for now, warranting a targeted investigation of their Output Service configuration.
- **Meter Data Consistency & Cleanup:** Significant effort is needed to address inconsistencies in how DSS processes bills, particularly around meter setup and key identifiers (vendor meter number, account number, SA ID, service type). Problems include:
  - DSS creating new meters/sub-accounts unnecessarily due to minor formatting differences (e.g., spaces/dashes in account numbers).
  - Inconsistent processing of identical bill formats (e.g., the same rate code) producing multiple build classes.
  - DSS relying solely on pre-June 2025 historical meter data, ignoring newer setups and updates.
  - Proposed solutions focus on implementing exclusions for known formatting issues and developing a method to clean up existing meter data so valid meters are reused consistently.

## Email-Based Invoice Processing Workflow

A capability exists to process invoices by emailing PDF attachments to a dedicated DSS Processing mailbox, automating intake without manual downloads. Key aspects include:

- **Current State:** The mailbox is currently overwhelmed with ~16,000 error reports from other DSS processes, obscuring new invoice submissions.
- **Improvement Plan:** Clean up the mailbox by deleting or archiving old error reports. New emails with attachments are processed in near real time, with successfully handled emails moved to a "Processed" folder and failures remaining visible.
- **Utility & Limitations:** This method saves time for straightforward invoice processing but lacks sender attribution in reporting. It suits bulk emails without special instructions, while manually downloaded invoices remain necessary for cases requiring specific handling notes.
- **Next Steps:** Test the workflow thoroughly in production and grant the team access to the shared mailbox for monitoring before wider adoption.

## Ongoing Development and Testing

Active development and testing efforts are underway:

- **Hide Button Deployment:** A fix for the "Hide" button functionality is complete in development and scheduled for deployment to production.
- **Commodity Tagging Fix:** A solution addressing issues with specific commodity tags (e.g., "FDG services") causing processing halts is ready in the test environment.
Testing requires identifying sample bills exhibiting this failure mode to validate the fix.
Arcadia Migration
### Summary

The meeting served as an introductory 1:1 between a new product owner and a team member focused on infrastructure. The product owner shared their background in product management, consulting, and startups, emphasizing their current focus on UBM and the foundational DSS systems. The team member outlined their technical journey, starting in identity/access management and infrastructure roles before transitioning to cloud engineering (GCP), and their current responsibilities within the project. These include infrastructure-as-code deployment, configuration adjustments for UBM, onboarding support, monitoring, SOC compliance procedures, and assisting with application support.

Key topics centered on the ongoing Azure migration, described as complex due to Constellation's restrictive environment and fragmented team structure. The migration requires extensive ServiceNow ticketing and coordination across isolated teams, with a "big bang" cutover approach (no rollback to GCP if issues arise). Concerns were raised about long-term sustainability if solutions are force-fitted into Azure without addressing the underlying architectural mismatches. SOC compliance documentation was confirmed to reside in Confluence (not Google Drive), with SOC 2 readiness noted as a future priority after the immediate migration challenges. Communication will primarily occur via Slack, though some team members face tooling hurdles due to dual laptop/virtual-desktop setups.

### Wins

- Successful adaptation of UBM infrastructure to meet evolving configuration requirements.
- Established SOC compliance procedures for infrastructure, documented in Confluence.
- Cross-functional support: infrastructure expertise is being leveraged to assist application teams with onboarding and monitoring.

### Issues

- **Azure Migration Constraints**: The process is slowed by bureaucratic layers, inflexible infrastructure policies, and disconnected team responsibilities (ServiceNow tickets are required even for basic tasks).
- **High-Risk Cutover**: The migration plan has no rollback option from Azure to GCP, making stability and pre-launch testing critical.
- **Tooling Fragmentation**: Team communication is hindered by split environments (virtual desktops for Azure, physical laptops for GCP), reducing accessibility on platforms like Teams.
- **SOC Compliance Scalability**: Current SOC 1 processes may not extend seamlessly to SOC 2 requirements, demanding future rework.

### Commitments

- **Risk Identification**: The infrastructure lead will proactively flag Azure migration risks (e.g., architectural gaps, sustainability concerns) and suggest mitigations.
- **Collaboration Protocol**: Urgent Azure issues will be escalated via email threads, triggering ad-hoc meetings as needed.
- **Documentation Access**: Confluence links for SOC procedures will be shared to ensure transparency.
- **Ongoing Support**: The product owner will provide backup for Azure migration leadership and remain accessible via Slack/Teams for unblocking tasks.
DSS Alignment
### Customer

The customer operates in utility bill management, focusing on processing and managing utility accounts and bills. Their role involves setting up accounts and meters manually, ensuring accurate data entry, and overseeing the integration of automated systems for bill processing. They work with a system that includes data acquisition, account setup queues, and an AI-driven processing model (LLM) to handle utility data. The customer's background emphasizes meticulous data validation and workflow optimization to support downstream operations for end clients.

### Success

The most significant achievement highlighted is the implementation of an automated bill processing system that integrates AI (an LLM) to handle high volumes of utility bills. This system successfully routes new accounts to setup queues and existing accounts to data acquisition workflows, streamlining the initial processing stages. Additionally, the team has established protocols for identifying and excluding certain bill types (e.g., foreign or summary bills) from automated processing, ensuring critical accounts are handled manually when necessary. The integration with FDG Connect for vendor management also demonstrates effective coordination between systems for bulk operations.

### Challenge

The primary challenge revolves around data inaccuracy caused by the automated system (DSS) creating duplicate or incorrect meters instead of using manually validated historical meters. This results in:

- Redundant meters with mismatched observation types or units of measure, consuming excessive system resources and causing downstream reporting errors.
- An inability to consistently exclude specific accounts, vendors, or individual bills from LLM processing, leading to manual overrides and workflow disruptions.
- Reporting inaccuracies in download metrics, where team efforts (e.g., snoozing unavailable accounts) go uncredited, skewing productivity assessments.

### Goals

Key objectives for the customer include:

- **Data Accuracy and Cleanup**: Ensure the LLM uses original, manually validated meters for processing to eliminate duplicates and restore data integrity. Reprocess or clean existing erroneous data to free up system resources.
- **Enhanced Exclusion Logic**: Implement multi-level exclusion capabilities to automatically bypass LLM processing for:
  - Foreign bills and summary bills (based on predefined page limits or vendor flags).
  - Specific vendors, clients, accounts, or individual bills via manual tagging in FDG Connect or the vendor management interfaces.
- **Workflow Optimization**: Route excluded bills to the appropriate queues:
  - New accounts to setup queues.
  - Existing accounts to data acquisition workflows.
- **Reporting Improvements**: Refine download reports to:
  - Credit only the first import per bill, to accurately reflect team contributions.
  - Track snoozed accounts to quantify manual intervention efforts.
- **Duplicate Detection**: Develop logic to identify and handle bills received through multiple channels (e.g., mail and web downloads) to prevent redundant processing.
AP File Standardization & Banyan
## Customer

The customer operates in the **multifamily property management sector**, serving as an intermediary between financial systems and property management software platforms. Their role involves ingesting and integrating AP (Accounts Payable) files into third-party accounting systems like Yardi, Entrada, and RealPage. This integration is critical for streamlining payment workflows and ensuring compatibility with diverse property management software ecosystems.

## Success

The most significant achievement has been the **successful automation of AP file processing** for clients, particularly through integrations with systems like Yardi. This automation has enabled seamless payment initiation, reduced manual data entry, and improved accuracy in financial operations. The product's ability to generate standardized JSON and payment files has been pivotal in scaling client onboarding and maintaining operational efficiency across multiple property management platforms.

## Challenge

The primary challenge revolves around **managing non-standard AP file requirements** caused by inconsistencies across property management software versions and configurations. For example, clients using Yardi might require tweaked file formats based on their specific software version, leading to ongoing custom development work. This creates friction in resource allocation, as ad-hoc requests (e.g., field renaming, layout changes) often bypass formal prioritization processes. Additionally, unclear communication between sales, customer success, and development teams has led to mismatched expectations around customization timelines and costs.

## Goals

- **Standardize AP file definitions** to minimize deviations and reduce reliance on custom development.
- Establish a **centralized process for handling customization requests**, including clear documentation of "standard" vs. "custom" requirements.
- Develop **client-facing guidelines** to clarify which modifications incur additional costs (e.g., formula changes vs. header renaming).
- Implement a **ticketing system** to track development requests, improve visibility into issue resolution, and reduce dependency on fragmented email chains.
- Align internal teams (sales, CSM, development) on protocols for escalating and approving non-standard requirements to prevent scope creep.
UBM Planning
## Summary

### Legacy System Naming Convention Challenges

The team addressed a critical issue: the UBM system requires adherence to a legacy naming convention for file ingestion, causing processing bottlenecks. Key challenges include inaccessible legacy code repositories, lack of system maintenance, and downstream dependencies. Proposed solutions involve:

- **Consulting historical experts**: Engaging Rachel and Afton to decode the legacy naming rules as a short-term workaround.
- **Modernizing the ingestion protocol**: Advocating a fresh approach that bypasses the legacy constraints entirely, given the shift to a one-to-one file relationship (one ZIP per PDF), which simplifies tracking.
- **Negotiating with UBM stakeholders**: Exploring whether the legacy naming rules can be relaxed for new data flows to avoid recurring issues.

### Team Capacity and Resource Allocation

Resource constraints and redundancy risks were highlighted, with plans to onboard new developers and redistribute responsibilities:

- **New developer integration**: Two full-stack developers (specializing in backend) will join to ease the workload on the DSS/IPS ecosystem and provide coverage during absences. Access-provisioning delays are pending resolution.
- **Mitigating single points of failure**: There is an urgent need to replicate administrative access (e.g., to ticketing systems) for emergency scenarios, especially during non-business hours when offshore teams are unavailable. A meeting with Cognizant's application manager is planned to establish contingency protocols.

### Invoice Processing and Tracking

Discussions focused on optimizing invoice ingestion and accurately attributing work:

- **Defining "first touch" for reporting**: Clarifying that attribution should reflect the initial user who processes a document, not subsequent handlers, to accurately track outsourced-team productivity.
- **Automating manual workflows**: Evaluating ways to reduce human intervention in email-based invoice processing, such as routing vendor emails directly to DSS via a dedicated mailbox or shared folders.
- **Bandwidth concerns**: Testing ingestion scalability before enabling customer-facing drop features, to avoid overwhelming the system with non-portal invoices (e.g., large email attachments).

### System Performance and Monitoring

The team reviewed real-time processing rates and queue management to prevent OpenAI-related bottlenecks:

- **Throttling for stability**: Current settings (2 files per 10 minutes for BD files, 4 files per 10 minutes for data acquisition) are tuned to avoid OpenAI timeouts, with no observed backlog.
- **Monitoring blind spots**: Web-uploader pathways (e.g., legacy portal uploads) bypass the controlled queues and will need future integration into the managed flow to enforce rate limits.
- **Upstream queue vigilance**: Commitment to watch data acquisition queues for unexpected growth, which would necessitate reconfiguring the processing rates.

### Ingestion Pathway Expansion

Scalable alternatives to manual invoice handling were explored:

- **Direct API integration**: Vendors can submit invoices via real-time API calls, already used by the UI.
- **Folder-based automation**: Dropping files into designated shared folders for scheduled ingestion, mirroring the BDE workflow.
- **Email-to-process automation**: Leveraging a dedicated mailbox (<dssprocessing@company.com>) to auto-ingest PDF attachments from forwarded emails, reducing manual downloads.
- **Validation safeguards**: Implementing page limits (e.g., a 10-page maximum) to reject non-invoice documents and conserve OCR credits (a sketch appears at the end of these notes).

### Outsourced Team Efficiency

Operational inefficiencies in outsourced data processing were addressed:

- **Redundant manual tasks**: Team members spend significant time downloading emailed invoices instead of retrieving them from portals, skewing productivity metrics.
- **Automation prioritization**: Expediting development of customer-facing drop features to eliminate manual steps and reallocate effort to high-value portal downloads.
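A minimal sketch of the page-limit safeguard, assuming the `pypdf` package; the 10-page cutoff is the meeting's example, not a confirmed setting:

```python
from pypdf import PdfReader

MAX_PAGES = 10  # example threshold from the discussion

def accept_for_ingestion(pdf_path: str) -> bool:
    """Reject oversized documents before any OCR credits are spent."""
    try:
        page_count = len(PdfReader(pdf_path).pages)
    except Exception:
        return False  # unreadable files shouldn't reach the OCR stage either
    return page_count <= MAX_PAGES
```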
UBM Planning
## Summary

### JIRA Board Transition

The transition from the current task-management system to a JIRA Agile board is underway but considered low priority. The board has been created, and the team is beginning to migrate tasks. A comprehensive review session is planned within the next two weeks to:

- **Validate existing tickets:** Assess which items are still relevant, require closure, or should be deleted.
- **Document long-term improvements:** Capture essential context and insights for future system enhancements during the migration.

### UBM File Delivery & Naming Convention

Significant discussion focused on resolving issues with delivering files to UBM, particularly the required ZIP file naming convention:

- **Naming convention challenge:** The UBM team requires specific ZIP file naming patterns that differ from current DSS practices, especially regarding "build type" information.
- **Dependency on legacy code:** Access to Matthew's "Output Service" code is critical to understanding and replicating the legacy logic for generating correct UBM file names, as this logic handles the necessary build-type mapping.
- **Access barrier:** Matthew's code resides locally on his machine, and he is currently unavailable, creating a blocker. Action will be taken to contact Matthew (potentially via Sunny) for a brief session to obtain the code or clarify the logic.
- **Immediate workaround considered:** If Matthew cannot be reached promptly, obtaining the complete naming specification and edge cases directly from UBM was suggested as a temporary measure.

### DSS Build Status & Output Generation

Progress was reported on integrating build-status visibility directly within the DSS and modifying the output generation process:

- **Build status display:** Implementation to show build status within the DSS is complete and pending validation.
- **Output generation workflow:** When generating the final ZIP file (containing the PDF and NDHS) for UBM, the system now automatically sets the build status to "Completed" and the workflow status to "Done," providing clear visibility into successful processing.

### Invoice Processing & Issue Management

Strategies for handling invoice-processing errors and managing reported issues were a key focus:

- **Reprocessing mechanism:** Invoices can be reprocessed by resetting their build status to "Data Acquisition" and workflow status to "Waiting for Operator," triggering the automated system to reprocess them (a sketch appears at the end of these notes).
- **Limitation on DSS audit failures:** Invoices failing DSS audits *cannot* simply be reprocessed. They require manual intervention within the DSS to perform corrective actions and explicitly submit the corrected data; reprocessing would likely reproduce the same failure.
- **Streamlining issue triage:** The process for handling questions and issues reported via chat needs improvement:
  - **Symptom vs. root cause:** Reports often describe only symptoms, requiring significant effort to diagnose the actual root cause.
  - **Historical vs. current issues:** Distinguishing failures in invoices processed *before* a fix was deployed (which the fix won't resolve) from failures occurring *after* the fix is crucial but often missed.
  - **Proposed solution:** Assign responsibility for initial triage, investigation (potentially involving BSS/reports), and JIRA ticket creation to a dedicated role (e.g., Faisal working with Afton), freeing engineering resources (Sheikh) for core development tasks.
    Rachel is currently the primary JIRA ticket creator.

### Long-Term Strategic Direction

The overarching goal for the system's evolution was clarified:

- **Disengage from legacy systems:** The primary objective is to establish a direct, isolated pathway for invoice data ingestion and delivery to UBM, completely bypassing the current legacy system.
- **Drive efficiency:** Eliminating dependencies on the high-maintenance legacy infrastructure is expected to significantly improve overall processing efficiency and reliability. Successfully processing 70,000 invoices demonstrates the new system's fundamental scalability, barring external issues like those from Microsoft.
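The reprocessing reset described above is essentially a two-field status update. A sketch with hypothetical table, column, and status names, since the DSS schema wasn't shown; it deliberately skips audit-failed invoices, which must be corrected manually:

```python
# Hypothetical schema and status labels; shown only to make the reset concrete.
RESET_SQL = """
    UPDATE invoices
    SET build_status = 'Data Acquisition',
        workflow_status = 'Waiting for Operator'
    WHERE invoice_id = ?
      AND build_status <> 'DSS Audit Failed';  -- audit failures need manual fixes
"""

def queue_for_reprocessing(cursor, invoice_ids: list) -> None:
    """Reset statuses so the automated pipeline picks the invoices up again."""
    for invoice_id in invoice_ids:
        cursor.execute(RESET_SQL, (invoice_id,))
```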
Review Ops Issues
## Summary

### Workload Distribution and Prioritization

The team addressed a significant backlog of 9,000 items in the processing queue, 5,000 of them categorized as prepaid. Work was strategically distributed to prioritize the oldest items and those due by September 10th. Specific assignments included:

- **Aspen**: Handled exclusively by Robert.
- **Renew**: Split between Daniel, Balo, and Kieran.
- **JCO, Bleak Facial Wear, Equinox**: Managed by Karen.
- **Victra, Westland, Altus, Property Management**: Assigned to Elizabeth.
- **Stone Creek and other vending customers**: Covered by Bilal, who also handles Altus Management, Gopuff, and Hillsburg.
- **MedXL**: Alerto is leading, with Megan assisting in mapping. Crystal is focused on mapping for Michael.

A targeted list of ~300 items per person was issued for immediate action, with plans to address the remaining enrollments after September 10th, on Monday.

### Payment Processing and Reserve Fund Challenges

Critical issues regarding payment processing, particularly emergency payments, were discussed:

- **Reserve Fund Necessity**: Emergency payments require sufficient reserve funds in the customer's FBO account to be processed immediately via PayClarity. Without reserves, payments must wait for fund requests, causing delays.
- **Customer-Specific Issues**: Stone Creek lacks reserve funds and shows no intent to establish them. Bayshore faces complications from managing 1,317 individual FBO accounts (one per property), making it difficult to leverage existing funds for new payments.
- **Payment Cancellation Protocol**: Canceling payments in "funding" status requires contacting PayClarity to refund the incoming funds. For issued checks not yet cashed, PayClarity must cancel them directly to recover the funds. Canceling via the app alone is insufficient.
- **Balance Tracking**: The "available funds" amount in PayClarity should be monitored for real-time accuracy, as other balance metrics (e.g., "cash balance") may not correctly reflect processing statuses.

### Client Onboarding and Configuration Updates

Key updates on client implementations and configurations:

- **Michael**: The go-live date was moved to October 21st. Work continues, but the vendor file has not been sent and setup bills are incomplete.
- **Cedar Street & Pacific Capital**: A requested "offset" attribute (potentially for bank codes) can be left blank per Kate's instruction, as the client may not use it. If issues arise later, values can be populated.
- **File Formatting Caution**: Assuming new clients need file formats identical to existing ones was challenged. Clients often use the same systems (e.g., Intrada, Yardi) differently or omit fields. Confirming exact requirements directly with the client *before* development is essential to avoid rework.

### System Issues and Queue Management

Persistent technical issues impacting workflow efficiency were highlighted:

- **Data Reprocessing Glitch**: Bills already processed and submitted to customers are inexplicably reappearing in "ready to send" or "in-process" status within DSS. This inflates queue numbers (the prepaid queue showed 3,631 items, mostly reprocessed) and causes significant rework. It is the top priority for the development team.
- **Current Queue Metrics**:
  - Prepay queue: 3,631 items (2,791 marked "ready to send", 264 duplicates).
  - UEM queue: 1,038 items (887 "ready to send", 78 needing deletion).
  - Other items: 6,274 items (5,177 "ready to send", needing fixes).
- **PPG Enrollment Follow-up**: Incomplete actions were noted in the Smartsheet for PPG enrollments. Many portal entries lacked credentials or had unresolved messages from Crystal. CSMs and Rachel were tagged for follow-up on accounts missing enrollment despite having credentials or needing historical data pulls. ### MedXL Contract and Credit Handling A specific issue regarding MedXL's contract amendment was resolved: - **Credit Method**: MedXL requested credit refunds via direct payment (wire/ACH) instead of applying them to future Constellation invoices. Applying the credit to the next month's invoice was confirmed as the standard and optimal process. An amendment is being processed, but delays are expected due to its non-standard nature.
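As a minimal sketch, the reserve-fund rule for emergency payments discussed above reduces to a balance check; the FBO account model and figures here are illustrative assumptions, not actual customer data.

```python
"""Sketch of the reserve-fund rule (FBO account model is an assumption): an
emergency payment can go out immediately via PayClarity only when the
customer's reserve covers it; otherwise it waits on a fund request."""
from decimal import Decimal

def can_pay_immediately(reserve_balance: Decimal, payment_amount: Decimal) -> bool:
    # Without sufficient reserves, the payment is queued behind a fund request
    return reserve_balance >= payment_amount

print(can_pay_immediately(Decimal("0.00"), Decimal("250.00")))     # False: no reserve (the Stone Creek situation)
print(can_pay_immediately(Decimal("5000.00"), Decimal("250.00")))  # True: reserve covers the payment
```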
PPG Discussion
## Summary ### Historical Invoice Data Retrieval Efforts are underway to retrieve PPG's historical invoice data from 2024 for all U.S. locations, with a focus on resolving gaps in the current dataset. - **Utility API limitations**: Utility API recently discontinued support for key utilities (e.g., Duke), complicating data retrieval. Alternative solutions like Arcadia are being explored but are not yet production-ready. - **Constellation as a fallback**: Data from Constellation (a supplier) is prioritized for extraction due to easier accessibility, while other utilities require manual intervention or alternative methods. - **Scope clarification**: The team confirmed PPG's request specifically targets invoices downloaded via portal setups, necessitating a list of accounts with active portal credentials. ### Invoice Processing and Missing Data Urgent action is required to process all available invoices by an upcoming Tuesday deadline, with specific issues identified in missing or incomplete data. - **Unloaded invoices**: Several invoices (emailed to the team as early as July 2024) remain unprocessed, including discrepancies where only gas invoices were loaded despite electricity/water data being sent. - **Data verification**: A supplier-specific report revealed gaps in loaded invoices, prompting an audit to locate missing entries (e.g., August invoices) within the system. - **Historical batch oversight**: Some invoices dated January-March 2025 were misfiled and require reclassification into a historical batch for proper processing. ### Reporting System Challenges Delays in report customization are impacting PPG's ability to track invoice status, with resource constraints exacerbating timelines. - **Timestamp field addition**: A QA process for adding invoice-load timestamps (critical for PPG's tracking) is underway and slated for deployment next week. - **Resource bottlenecks**: A separate "spread report" faces delays until mid-October due to overlapping priorities and limited developer bandwidth, compounded by team vacations. - **Workaround evaluation**: Options to accelerate reporting include reallocating resources from other projects (e.g., Medexl) or creating one-off solutions, though both risk timeline unpredictability. ### Strategic Workarounds and Vendor Collaboration The team is exploring partnerships and technical workarounds to address data gaps, particularly for utilities unsupported by existing tools. - **Utility API overlap analysis**: Initial checks identified partial coverage for utilities like Consumers Energy and PG&E, with plans to quantify how many accounts can be serviced. - **Arcadia trial access**: A proposal to request expedited access to Arcadia's platform (as a Utility API alternative) was discussed, leveraging the urgency to negotiate a short-term trial. - **Credential inventory**: A comprehensive list of PPG's portal credentials was reviewed to identify which accounts can be prioritized for manual data extraction. ### Client Expectations and Customization Trade-offs PPG's initial rejection of customized solutions has led to operational friction, highlighting misalignments in process design. - **Custom report reluctance**: PPG previously declined a tailored AP file, opting for standard reports, which is now causing delays in resolving invoice-tracking issues. - **Timeline pressures**: The client emphasized dissatisfaction with prolonged unresolved requests (e.g., historical data pending since May), stressing the need for accelerated solutions.
- **System limitations**: The absence of a "bill-loaded date" field in existing reports requires backend development, introducing QA complexity and potential errors. ### Next Steps and Communication Protocol Immediate actions include auditing existing invoices, coordinating data pulls, and maintaining client transparency. - **Monday checkpoint**: A follow-up meeting is scheduled for Monday to assess progress before the Tuesday deadline. - **Data gap resolution**: Specific missing invoices will be investigated today to confirm receipt and loading status. - **Thread-based updates**: All communications and progress reports will be centralized in an email thread for visibility.
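The "overlap analysis" above is essentially a set-membership count over PPG's account list. A minimal sketch, with illustrative account records and a placeholder supported-utility list rather than real coverage data:

```python
"""Sketch of the Utility API coverage check (account data is illustrative)."""
SUPPORTED_BY_UTILITY_API = {"Consumers Energy", "PG&E"}  # partial list named in the meeting

accounts = [
    {"account_id": "A-001", "utility": "Duke"},             # support recently discontinued
    {"account_id": "A-002", "utility": "PG&E"},
    {"account_id": "A-003", "utility": "Consumers Energy"},
]

covered = [a for a in accounts if a["utility"] in SUPPORTED_BY_UTILITY_API]
uncovered = [a for a in accounts if a["utility"] not in SUPPORTED_BY_UTILITY_API]

print(f"{len(covered)} of {len(accounts)} accounts serviceable via Utility API")
print("Needs portal credentials / manual pull:", [a["account_id"] for a in uncovered])
```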
UBM Planning
## Summary ### Compliance Process Optimization Efforts are underway to streamline evidence collection and management for SOC1 compliance, particularly for UBM, to reduce manual overhead and transition toward automation where feasible. Current processes involve excessive manual intervention, consuming significant development resources ("story points") and causing last-minute scrambles across teams, including operations staff who often lack clarity on their responsibilities. Key initiatives include: - **Mapping existing evidence workflows:** Documenting current SOC1 evidence collection methods, folder structures (e.g., Google Drive with categories like *Info*, *Evidence*, *Systems*), and stakeholder touchpoints to identify automation opportunities by December. - **Evaluating automation tools:** Assessing solutions like Vanta’s SOC1 module (costing ~$5k) to automate evidence gathering, with plans to review last year’s evidence folders to determine which components can be automated versus those requiring ongoing manual effort (e.g., password policy audits). - **Resource realignment:** Considering hiring a junior compliance role to handle day-to-day evidence management, freeing senior resources for oversight, especially as compliance demands grow across areas like carbon accounting. ### Team Structure and Development Efficiency The structure and efficiency of the Cognizant development teams supporting UBM and carbon accounting are under review due to resource constraints and impending team changes. A senior full-stack UBM developer is transitioning out, necessitating backfilling, while current workflows show bottlenecks: - **Inefficient resource allocation:** Teams are siloed (11 for carbon accounting, 15 for UBM, plus a shared resource for DSS), with some developers overloaded while others lack capacity to address urgent issues ("fires"). Customization requests consume disproportionate effort despite being non-strategic. - **Transition planning:** Leadership prioritization responsibilities may shift from "Jay" to others by November, requiring accelerated familiarization with UBM’s prioritization processes and team dynamics. Attendance at release/story meetings is encouraged to identify redundancies and gaps, such as treating DSS as a "black box" rather than integrating it effectively. - **Addressing technical debt:** Customizations for individual clients are rampant, leading to unsustainable development cycles (e.g., simple reports taking months). A push is underway to catalog these customizations, explore productization of common features, and enforce stricter governance to prevent ad-hoc commitments. ### DSS System Challenges Operational issues within the Data Services (DSS) system are causing data processing delays and errors, compounded by visibility and access limitations: - **Processing failures:** Recent errors in batch processing (potentially linked to changes in IPS, the Invoice Processing System) require reprocessing, with UBM often failing to handle DSS-generated CSVs correctly. Collaboration between DSS and UBM teams is needed to resolve validation issues and improve error messaging. - **Access constraints:** Critical monitoring views for OpenAI/DSS integrations are inaccessible to most team members, creating a single point of failure. Efforts are underway to secure access for at least two engineers to mitigate this risk. - **Leadership gaps:** The DSS team lacks dedicated leadership, with part-time oversight unable to provide adequate support, exacerbating response times for issues.
### Sales-Product Alignment Misalignment between sales commitments and product capabilities is driving unsustainable customization demands and implementation delays: - **Overpromising in sales:** Sales teams frequently commit to unvetted or out-of-scope features during client negotiations, leading to reactive, high-effort custom development that strains resources and delays core improvements. - **Corrective measures:** Implementing stricter controls, including diverting all technical inquiries to product/engineering for feasibility assessments before client commitments, and creating internal FAQs/scripted demos to standardize feature representations. - **Long-term strategy:** Exploring product changes to make common customizations self-serve or configurable by Customer Success, reducing the need for hard-coded solutions. ### Knowledge Transfer and Onboarding Accelerating domain expertise in UBM’s complex functionality is critical ahead of leadership transitions, with a focus on immersive learning: - **Targeted upskilling:** Conducting working sessions with subject-matter experts (e.g., on *locations*, *attributes*, *build blocks*, *virtual accounts*) to clarify UBM’s intricacies and differences from other systems like DSS. - **Prioritization trade-offs:** Initial focus on stabilizing DSS (due to active "fires") has delayed deep UBM immersion, but attendance at UBM incident resolutions is now prioritized to build context and address root causes of recurring issues.
Zendesk
## Summary ### MFA Authentication Challenges Persistent issues with Microsoft-managed MFA were discussed, including the inability to disable it despite prior assurances. - **Phone-based authentication workaround**: A solution involving assigning dedicated phone numbers for MFA codes was proposed, requiring immediate testing before implementation. - **Urgent coordination needed**: A request was made to email stakeholders for specific phone numbers assigned to individuals, emphasizing the need to resolve this before sign-off due to time zone differences (two hours behind). ### Staffing and Workflow Disruptions Alex C. from the UBM team resigned while covering for Ruben (on medical leave), creating operational gaps. - **Impact on critical projects**: His departure diverts focus from UBM’s core tickets, delaying SSO implementation and affecting dependencies like Simon and Medicine’s work. - **Extended transition**: A 30-day notice period further strains resources, compounding existing workflow challenges. ### Ticketing System Implementation A centralized ticketing system (e.g., Zendesk, HubSpot, or Jira) was proposed to replace fragmented email chains for issue tracking. - **Key requirements**: - **Internal assignment**: Enable CSMs, UBM Ops, and Data Services to assign/review tickets, ensuring visibility and accountability. - **Integration with CRM**: HubSpot’s native ticketing could leverage existing customer data (e.g., contract terms, product tiers) for prioritization. - **External branding caution**: Jira was suggested but requires validation that "Arise" branding isn’t exposed to customers to avoid legal/regulatory risks. - **Adoption strategy**: Start internally with CSMs manually creating tickets to refine workflows before involving customers. Mary Foster’s prior Salesforce ticketing experience was noted as a resource. ### Portal Access and Authenticator Management Critical vulnerabilities in portal access workflows were highlighted, particularly for web download teams. - **Cell phone dependency**: Reliance on personal devices for MFA codes creates operational risks (e.g., staff departures, vacations). - **Centralized solution**: Proposed use of digital phone numbers linked to Teams/Slack or web portals for code retrieval, enabling scalability and shift coverage. - **Expanding urgency**: BGE’s new authentication requirements exacerbate the issue, preventing task delegation to external teams (e.g., Mexico) due to current setup limitations. ### Data Workflow Inefficiencies Fragmented systems (e.g., FDG-NoTab disconnect) and inconsistent processes hinder productivity. - **Daily productivity loss**: ~15 minutes daily spent switching contexts (e.g., incognito browsers) to access necessary data. - **Onboarding chaos**: Multiple conflicting onboarding sheets for clients like MedExcel and Victra create confusion, requiring immediate reconciliation.
GTCX MFA Discussion
## Summary ### MFA Access Challenge for Operators Operators face difficulties accessing systems due to mandatory MFA requirements incompatible with their no-cellphone office environment. The built-in MFA solution is unworkable because operators cannot use personal devices for authentication during work. ### Proposed Solutions Three primary solutions were evaluated to resolve the MFA access barrier: - **Yubikey Hardware Integration**: Leveraging existing Yubikeys (hardware security keys) as an alternative MFA method. Operators already possess these devices, and integration with Microsoft Entra ID is feasible. Setup involves linking each key to individual user accounts during initial login. - **Phone Call Authentication**: Utilizing landline-based verification where operators receive automated calls with codes. Each operator has a dedicated phone line, making this a viable fallback if hardware integration faces delays. - **Temporary MFA Disabling**: As an urgent interim measure, MFA will be deactivated for affected users to unblock immediate access. This is considered a last resort due to security risks but necessary to avoid operational delays. ### Implementation Strategy A phased approach was agreed upon to balance urgency and security: 1. **Immediate Action**: - MFA will be disabled today for all affected operators using a provided user list. - Operators will test system access (including FDG and Microsoft Teams) to confirm functionality. 2. **Post-Holiday Follow-Up**: - Yubikey integration will be prioritized next week after key personnel return. - Configuration requires testing with physical Yubikeys and Microsoft Entra ID setup. 3. **Contingency Planning**: - If Yubikey deployment stalls, phone-call authentication will be activated as a backup. ### Next Steps - The user list for MFA disabling will be shared immediately. - Post-testing validation will confirm whether landline-based MFA or Yubikeys are sustainable long-term solutions. - A follow-up meeting is scheduled for early next week to review Yubikey progress and re-enable MFA.
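For the landline option, Microsoft Graph exposes a phoneMethods endpoint that can register an office number as a voice-call authentication method. A minimal sketch, assuming an admin token with UserAuthenticationMethod.ReadWrite.All; the user list and numbers are placeholders:

```python
"""Sketch: register an office (landline) phone as an MFA method via Microsoft Graph.
Token acquisition (e.g., via MSAL) is elided; users and numbers are placeholders."""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # needs UserAuthenticationMethod.ReadWrite.All

def register_office_phone(user_upn: str, number: str) -> None:
    # phoneType "office" enables voice-call verification to a landline
    resp = requests.post(
        f"{GRAPH}/users/{user_upn}/authentication/phoneMethods",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"phoneNumber": number, "phoneType": "office"},
    )
    resp.raise_for_status()

for upn, number in [("operator1@example.com", "+1 5555550100")]:
    register_office_phone(upn, number)
```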
DSS Check-in
## Summary The discussion focused on current workstreams and operational challenges, primarily around invoice processing systems and backlog management. Key topics included reprocessing historic PPG Industries bills due to LLM bug fixes, addressing bottlenecks in invoice processing workflows, and migrating project tracking to a new Jira instance. There was emphasis on improving data accuracy through model refinements and creating scalable solutions for recurring issues like failed invoice reprocessing. The conversation also covered ticket prioritization, with several outdated Azure DevOps items being reviewed for relevance or closure. ## Wins - Successful identification of a method to reprocess historic PPG Industries bills (3,800 records) through database updates - Existing script developed for automatic reprocessing of failed invoices (though currently paused) - Progress on DSS UI updates for user activity reporting and build status visibility ## Issues - Manual database updates required for PPG Industries bill reprocessing due to system limitations - No automated retry mechanism for failed invoice processing (IPS status failures) - Lack of bulk reprocessing capability in DSS UI for failed invoices - Throttling issues in invoice processing system causing recurrent failures - Missing documentation for FDG Connect local development environment setup - Outdated/irrelevant tickets cluttering Azure DevOps (e.g., BDE accounts file lookup, SMB processing tickets) - Access limitations to BDE team's database preventing client matching reports ## Commitments - Reprocess 3,800 PPG Industries bills through manual database updates (Team Member) - Implement user activity report updates and DSS build status visibility enhancements (Team Member) - Create tickets for automated invoice retry mechanism and bulk reprocessing UI feature (Manager) - Migrate all active work items from Azure DevOps to Jira within the week (Manager) - Develop documentation for FDG Connect local environment setup (Team Member) - Review and clean up outdated Azure DevOps tickets (Manager)
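The "manual database update" presumably amounts to resetting a status flag so the (currently paused) reprocessing script picks the bills up again. A rough sketch with a toy schema, since the real table and column names were not specified:

```python
"""Sketch of the bulk status reset behind the PPG reprocessing (toy schema)."""
import sqlite3  # stand-in for the production database driver

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bills (bill_id TEXT, customer TEXT, ips_status TEXT)")
conn.execute("INSERT INTO bills VALUES ('B1', 'PPG Industries', 'failed')")

with conn:  # one transaction: flip failed bills back to pending
    cur = conn.execute(
        "UPDATE bills SET ips_status = 'pending' "
        "WHERE customer = 'PPG Industries' AND ips_status = 'failed'"
    )
print(cur.rowcount, "bill(s) queued for reprocessing")
```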
Follow-up with Navigator & Microsoft Team
## Summary ### Meeting Purpose and Scheduling The primary focus was to address potential blockers and provide guidance for a migration-related project involving Microsoft. A follow-up meeting was proposed for the following weeks, though immediate communication via email was encouraged for urgent issues. - **Scheduling follow-up**: A team sync was suggested for the next week or the week after, but flexibility was emphasized based on project needs. - **Purpose clarification**: The meeting aimed to troubleshoot challenges, share best practices, and ensure smooth progress, particularly around technical integrations. ### Progress on Technical Implementation Significant progress was reported in resolving a critical pain point related to PostgreSQL and Application Gateway configurations. - **PostgreSQL and Application Gateway integration**: - Successfully tested TCP TLS connectivity between Application Gateway and PostgreSQL, enabling secure database access. - This resolved prior challenges in establishing reliable connections, marking a key milestone in the project. ### Communication and Support Channels The team emphasized asynchronous communication to maintain momentum and avoid delays. - **Email as primary contact**: Immediate issues or queries should be directed via email for rapid response, bypassing the need to wait for scheduled meetings. - **Ad-hoc support availability**: Microsoft’s team offered to review technical configurations or provide guidance on-demand, ensuring continuous progress. ### Next Steps and Flexibility While no active blockers were reported, the framework for future collaboration was reinforced. - **Upcoming syncs**: A tentative meeting in the coming weeks was mentioned, but the focus remained on leveraging email for real-time problem-solving. - **Proactive issue escalation**: Encouraged raising concerns early to preempt delays, even if formal meetings aren’t immediately available.
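A connectivity check along the lines of what was tested might look like the following psycopg2 sketch; the hostname and credentials are placeholders, and sslmode="require" makes the connection fail unless TLS is actually negotiated on the TCP path through the gateway:

```python
"""Sketch of a TLS connectivity check through the Application Gateway frontend
(host, database, and credentials are placeholders)."""
import psycopg2

conn = psycopg2.connect(
    host="appgw-frontend.example.com",  # Application Gateway frontend
    port=5432,
    dbname="postgres",
    user="dbadmin",
    password="<secret>",
    sslmode="require",      # refuse to connect without TLS
    connect_timeout=10,
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])  # reaching this confirms the secure path works
conn.close()
```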
Arcadia Migration
## Meeting Purpose and Rescheduling The meeting aimed to address technical challenges in migrating infrastructure to Azure with Microsoft's support, but key personnel absence necessitated rescheduling. Despite attempts to accommodate schedules, critical stakeholders were unavailable, requiring postponement by approximately 10 days. ## AKS Cluster Upgrade Permissions Discussions centered on minimizing permissions for Kubernetes version upgrades in Azure Kubernetes Service (AKS). Key insights: - **Granular access requirements**: Upgrading AKS clusters requires control-plane contributor permissions, not just data-plane access. - **Custom role solution**: While no built-in "upgrade-only" role exists, creating custom Azure roles is feasible to restrict permissions strictly to cluster upgrades. - **Security trade-offs**: Contributor access remains common, but custom roles are recommended for high-security environments like DMZ-hosted clusters handling public internet traffic. ## AKS Upgrade Best Practices Best practices for managing AKS cluster updates were clarified: - **Automatic vs. manual upgrades**: Automatic upgrades (now generally available) represent Microsoft's recommended standard, while manual upgrades offer control for specific maintenance windows. - **Maintenance windows**: For user-facing applications, upgrades should align with predefined outage periods to minimize disruption. - **Version management**: Administrators can define versioning policies and maintenance schedules via Kubernetes release channels. ## Migration Progress and Technical Hurdles Current migration efforts face connectivity issues: - **App Gateway bottlenecks**: Challenges persist in routing traffic through Azure Application Gateway to internal AKS load balancers. - **Architecture specifics**: The design uses a public-facing Application Gateway routing to an internal load balancer in a trusted network zone, without AGIC (Application Gateway Ingress Controller). - **Certificate complications**: Earlier certificate-related problems hindered implementation progress. ## Security Considerations for Public-Facing Clusters Special precautions were emphasized for AKS clusters exposed to the internet: - **Cluster hardening**: Follow Microsoft's automatic upgrade configuration as a security benchmark for standard-mode clusters. - **Validation necessity**: Avoid automatic deployment until design validation completes, especially for critical-path infrastructure. ## Next Steps for Issue Resolution Collaborative troubleshooting plans were outlined: - **Pre-meeting preparation**: Detailed issue descriptions (e.g., App Gateway connectivity errors) will be shared via email to enable targeted support. - **Specialized expertise**: Microsoft will assess whether niche technical knowledge is required for unresolved problems. - **Rescheduled session**: The postponed meeting will prioritize architecture reviews and migration efficiency optimizations.
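Since no built-in upgrade-only role exists, a custom role would bundle the relevant Microsoft.ContainerService actions. A sketch follows; the action list is an assumption to verify against current Azure documentation, and note that managedClusters/write permits other cluster changes besides upgrades, which is exactly why the built-in gap exists:

```python
"""Sketch of a custom 'upgrade-only-ish' AKS role definition (actions and scope
are assumptions); feed the JSON to:
    az role definition create --role-definition @aks-upgrader.json"""
import json

role = {
    "Name": "AKS Cluster Upgrader (custom)",
    "Description": "Read clusters and trigger version upgrades; nothing broader.",
    "Actions": [
        "Microsoft.ContainerService/managedClusters/read",
        "Microsoft.ContainerService/managedClusters/upgradeProfiles/read",
        "Microsoft.ContainerService/managedClusters/write",  # control-plane write needed to change version
        "Microsoft.ContainerService/managedClusters/agentPools/read",
        "Microsoft.ContainerService/managedClusters/agentPools/write",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],
}

with open("aks-upgrader.json", "w") as f:
    json.dump(role, f, indent=2)
```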
Export File Issues
## Payment File Requirements and Format Discussion The meeting focused on determining the technical requirements for sharing payment file information, specifically the 819 file format, to align with existing processes. Key points included: - **File format clarification**: The current process involves receiving a text-based 819 file from an NG application, processed through Sterling before reaching SAP. There was confusion about whether to use a standard AP file (CSV) or a custom text file, with the latter requiring significant development work. - **Mapping alignment**: A sample file mapping was shared to ensure field consistency (e.g., "billing period" vs. "service period"). Critical gaps were identified, such as undefined mandatory fields and formatting rules (e.g., date formats, field lengths). - **Delivery mechanism**: Files are dropped via SFTP multiple times daily, but the interface’s technical specifications (e.g., positional text vs. CSV) remained unresolved, pending confirmation from stakeholders. ## Implementation Timelines and Options Development timelines vary drastically based on the chosen file approach: - **Standard AP file (CSV)**: - Zero development effort; can be enabled immediately. - Custom CSV requires 10-15 days for field mapping configuration. - **Custom text file (positional)**: - Requires building a new adapter from scratch due to deprecated legacy support. - Estimated delivery in December, contingent on mid-October development start and timely testing feedback. - **Testing dependencies**: Delays risk impacting SAP’s parallel build and quality assurance cycles, as their testing requires the finalized file format. ## Field Mapping and Definition Challenges Detailed mapping discussions revealed unresolved data definition issues: - **Critical ambiguities**: - **Company codes** (e.g., "0400" for East Rockies) lack clear ties to location attributes in onboarding templates. - **Invoice numbers** are not consistently mandatory; gaps exist for scenarios without utility invoices (e.g., using account IDs as substitutes). - **Date formats and field lengths** (e.g., 8-character state fields) require explicit validation. - **Onboarding impact**: Vendor attributes (e.g., vendor codes) and hard-coded values (e.g., "GRN") need integration into client onboarding processes, necessitating template updates. ## Reporting and Metrics for Document Processing A separate discussion addressed inaccuracies in tracking document downloads and processing: - **Current reporting gaps**: - Metrics fail to capture snoozed invoices, manual email processing, and files uploaded via File Exchange/Upload Folder (attributed to "File Watcher" instead of users). - Discrepancies were noted between manual logs and automated reports (e.g., 309 vs. 22 tracked downloads for one user). - **Solutions proposed**: - Add snooze tracking with vendor/client details to validate workload and resolve disputes. - Manual tracking for email/File Exchange activities until system fixes are implemented. - **KPI implications**: Performance reports exclude Download Helper actions, undervaluing user contributions. ## Technical Issues with File Processing Critical system flaws affecting workflow efficiency were highlighted: - **Download Helper instability**: - Freezes and forced refreshes cause lost progress during bulk snoozing/downloading. - **Duplicate processing**: - Files uploaded via Download Helper are relabeled to "File Watcher" in DSS, causing tasks to reappear and triggering redundant downloads.
- This results in duplicate records requiring manual cleanup and inflates workload metrics. - **Priority fixes needed**: - Align DSS records with original filenames/uploaders. - Prevent task reappearance post-upload to eliminate redundant work.
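To make the positional-versus-CSV distinction above concrete, a small sketch follows. The field widths and ordering are assumptions pending the finalized spec; the company code "0400" and the hard-coded "GRN" come from the examples discussed:

```python
"""Sketch contrasting the two payment-file shapes under discussion."""
import csv, io

payment = {
    "company_code": "0400",       # e.g., East Rockies
    "invoice_no": "INV-1001",     # not always present; account ID may substitute
    "service_start": "20250901",  # date format still to be confirmed
    "service_end": "20250930",
    "amount": "1234.56",
    "source": "GRN",              # hard-coded value from the mapping
}

# Standard AP file: plain CSV, zero development effort
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=payment.keys())
writer.writeheader()
writer.writerow(payment)
print(buf.getvalue())

# Positional (819-style) record: every field padded/truncated to a fixed width
WIDTHS = [4, 12, 8, 8, 12, 3]  # would come from the finalized spec
record = "".join(value.ljust(width)[:width] for value, width in zip(payment.values(), WIDTHS))
print(record)
```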
DSS Updates Review
## UBM Export Functionality Implementation A new feature was developed to create zip files containing PDF invoices and JSON outputs for each processed bill, then push them to an FTP server for the UBM team. This integration into the DSS application replaces a legacy service that had reliability issues, centralizing the process for easier maintenance and tracking. Key details include: - **Purpose**: The UBM team uses the JSON to validate extracted data (e.g., commodities, charges) against the original PDF, ensuring accuracy before further processing. - **Technical workflow**: After IPS (Intelligent Processing Service) completes bill processing, DSS generates the zip (named by bill ID), combining the JSON output and PDF, then transmits it via FTP. - **Current status**: Development is complete but awaits testing by the UBM team, specifically a developer named Alexandru Mitches, to confirm functionality before deployment. ## DSS Status Visibility Enhancement The team prioritized adding workflow and build status columns to the DSS UI to resolve confusion about bill progression stages, which previously led to redundant reprocessing. This change will display granular statuses like "waiting for operator" or "in process" alongside workflow IDs (e.g., data acquisition, output data). Critical insights: - **Problem background**: Bills marked as "failed" in DSS could actually be completed in IPS, causing operators to unnecessarily reprocess them. - **Solution design**: Two new columns will show: - *Workflow status*: Indicates the current stage (e.g., DSS IPS, duplicate resolution). - *Build status*: Reflects real-time state (e.g., "in process," "waiting for system"). - **Next steps**: This is a short-term fix; long-term improvements require deeper workflow documentation and potential automation (e.g., auto-hiding completed bills). ## System Architecture and Knowledge Transfer Discussions highlighted gaps in understanding the IPS system’s role, prompting plans for a dedicated session to document its integration with DSS. IPS processes bills using LLMs after DSS initializes records, but knowledge is siloed with its original creator. Key points: - **IPS function**: Acts as an LLM-driven layer within DSS that extracts and structures bill data, then returns results to DSS for further handling. - **Knowledge sharing**: A session will be scheduled to transfer expertise from the IPS creator to the broader team, reducing dependency risks and improving troubleshooting. - **Repository structure**: DSS and IPS codebases reside in separate Azure DevOps repos (FDG Connect DSS and IPS), with UBM operating as an external team. ## Testing and Quality Assurance Processes Concerns were raised about the lack of structured testing for DSS features, relying on UBM or product teams (e.g., Rachel, Afton) for validation instead of internal QA. This approach risks overlooking issues pre-deployment. Highlights include: - **Current gap**: No dedicated QA resources exist for DSS; testing is ad hoc, performed by developers or end-users during acceptance. - **Export feature example**: The UBM team will test the new zip functionality, but a formal internal testing protocol is needed to catch errors early. - **Broader impact**: Without systematic checks, errors (e.g., incorrect status flags) can cause operational inefficiencies, like duplicated work. ## OpenAI Processing Delays Persistent delays in OpenAI processing have created a backlog of unprocessed bills from September 12-17, slowing overall system throughput. 
The team is monitoring and escalating the issue. Details covered: - **Current status**: Bills from September 10-11 were mostly processed, but September 12 onward remain queued, with responses trickling in slowly. - **Action plan**: - Escalating to Microsoft (via Macha) to address API bottlenecks. - Coordinating with Sunny and Gaurav for updates, with progress shared in team chats. - **Impact**: Delays hinder downstream UBM validation and customer-facing outputs, requiring manual tracking.
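A minimal sketch of the zip-and-FTP export flow described above, assuming placeholder paths, server, and credentials:

```python
"""Sketch of the UBM export: bundle a bill's PDF and JSON into a zip named by
bill ID, then push it to the UBM-facing FTP server (all endpoints are placeholders)."""
import zipfile
from ftplib import FTP

def export_bill(bill_id: str, pdf_path: str, json_path: str) -> None:
    zip_name = f"{bill_id}.zip"
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(pdf_path, arcname=f"{bill_id}.pdf")    # original invoice image
        zf.write(json_path, arcname=f"{bill_id}.json")  # IPS extraction output
    with FTP("ftp.example.com") as ftp:
        ftp.login("ubm-export", "<secret>")
        with open(zip_name, "rb") as fh:
            ftp.storbinary(f"STOR {zip_name}", fh)

export_bill("12345", "12345.pdf", "12345.json")
```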
[EXTERNAL]Story Walkthrough
## Emergency Payment System Updates The emergency payment button must be disabled if any prior fund request (standard/custom AP file, PC file, or emergency payment) exists for a bill. This prevents duplicate payments and ensures the button is only active when no pending or processed funding actions are underway. ### Separate AP File Generation for Emergency Payments A key solution involves creating distinct AP files for emergency payments versus standard funding requests to resolve reconciliation mismatches with Payment Clearing (PC) systems. - **Problem**: Combined AP files (emergency + standard payments) cause funding request mismatches because PC sends separate requests per file type. - **Solution**: Generate individual AP files per emergency payment (or a consolidated emergency file) to align 1:1 with PC funding requests. - **Implementation**: Starting with customer Victra as a pilot, leveraging existing logic (similar to Aspen's "excluded file" approach). Requires per-customer code changes due to unique AP file configurations. ## Reconciliation and Reporting Enhancements ### Emergency Payment Exceptions Tab Columns for processing fee and total paid amount will be added, with data pulled from reconciled PayClarity information. - **Processing Fee**: Displayed even if $0 (replacing dashes) to provide full transparency. - **Total Paid Amount**: Reflects the actual disbursed amount, sourced from reconciliation data. - **Edge Cases**: If no processing fee exists (e.g., pre-AP file generation), dashes remain temporarily. ### Bill Pay Reconciliation Logic Reconciliation data will include batch IDs and cleared timestamps per line item, with refined amount display rules. - **Batch IDs**: Added to track individual payments; labels hidden if IDs are unavailable (e.g., older bills). - **Amount Display**: - *Normal Payments*: Show net amount (excluding fees paid by Constellation). - *Emergency Payments*: Show total amount (including customer-incurred processing fees). - **Cleared Timestamps**: Included for check/eCheck payments to confirm settlement dates. ## User Management and Activation Flow Non-SSO user onboarding will be streamlined by revising status transitions and email verification. - **Pending Status**: Users marked as "Pending" post-invitation until email verification. - **CSM Visibility**: Customer Success Managers can see "Pending" status and resend verification emails. - **Post-Verification**: Status shifts to "Active," enabling password resets and standard editing. ## System and Process Improvements ### AP File Creation Simplification Database constraints on payment file types will be removed, and customer-specific payment file configurations will be editable via UI. - **Benefit**: Avoids manual database updates and increases transparency for customer setups. ### Validation Error Handling Error codes 2015 and 3020 will be addressed to reduce manual resolution efforts. - **Error 2015**: Treated as unresolvable (similar to 2016) since it flags minor discrepancies (<$1 in bill subtotals). - **Error 3020**: Auto-resolved for existing and new bills to minimize backlog. ## Customer-Specific Initiatives ### Sample AP File Generation Urgent sample files are needed for customer Extension to demonstrate AP file structure and cost allocation reporting ahead of a Tableau integration discussion. Kobea’s sample file is lower priority and deferred to the next sprint. ### AltaFiber History Load A low-priority task to fill billing data gaps using provided Excel files after PDFs are processed via DSS.
May utilize mock bill workflows if feasible. ## Strategic Priorities ### JSON File Ingestion Migration to JSON-based data ingestion is a high-priority initiative to replace legacy processes, with testing in lower environments underway.
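The button-gating rule at the top of this summary reduces to a predicate over a bill's funding history. A sketch under an assumed data model:

```python
"""Sketch of the emergency-payment button rule: enabled only when the bill has
no prior funding action of any kind (data model is an assumption)."""
from dataclasses import dataclass, field

FUNDING_TYPES = {"standard_ap", "custom_ap", "pc_file", "emergency"}

@dataclass
class Bill:
    bill_id: str
    fund_requests: list = field(default_factory=list)  # e.g., ["standard_ap"]

def emergency_payment_enabled(bill: Bill) -> bool:
    # Any pending or processed funding action disables the button
    return not any(r in FUNDING_TYPES for r in bill.fund_requests)

assert emergency_payment_enabled(Bill("B1"))
assert not emergency_payment_enabled(Bill("B2", ["pc_file"]))
```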
Daily Progress Meeting
## Summary ### Third-Party Authentication Challenge A third-party downloading team faces restrictions on using cell phones for authentication, requiring an alternative solution. - **Collaboration initiated**: Sunny and Faisal will spearhead discussions with the team to identify a viable authentication method compatible with their constraints. - **Existing capability noted**: The team confirmed successful integration with other suppliers under Constellation, suggesting feasibility despite the phone restriction. - **Technical gap**: The solution involves a physical device plugged into computers, but specifics require further technical alignment with internal systems. ### Queue Metrics and Processing Issues Current prepay queues show significant volumes, primarily due to data integrity and mapping errors. - **5,325 prepay items in queue**: Half (2,500) are flagged for integrity checks stemming from recurring issues: - **Mapping errors**: Incorrect account/meter numbers, service types (e.g., distribution vs. full service), or block configuration mismatches. - **Validation discrepancies**: Bills failing "total charges not matching" checks split between data verification and integrity checks, inflating volumes. - **Data Services (DS) vs. UBM disconnect**: Investigations reveal inconsistencies between DS outputs and UBM expectations, prompting ticket creation for deeper analysis. - **Prioritization workaround**: Limited resources necessitate parking tasks in lists for team members, bypassing standard workflow protocols. ### Enrollment and Onboarding Updates Progress across multiple clients faces hurdles like missing credentials and billing documents. - **SNS**: Additional setup bills underway; onboarding documentation access issues being resolved. - **Qualdrome**: Mapping completed for Phase 5, but 21 accounts lack bills and 18 lack valid credentials. - **Jayco**: Awaiting response for one outstanding bill. - **Medexcel**: Alberto processing bills with available PDFs; outreach phase starts next week. - **Michaels**: 12 data services pending completion before mapping. - **Pacific Capital**: On track for 24th go-live, but credential setup remains unconfirmed. ### Workload Management and Resource Allocation Teams are reallocating resources to address backlogs and prioritize critical tasks. - **Postpay bottleneck**: 4,046 of 5,262 items in "ready to send" need manual fixes or reprocessing. - **Resource shifts**: Maria moving from mail duties to data processing to bolster capacity. - **Mock bill prioritization**: Non-Banyan bills cleared first, with rebates team assisting lab bill processing. - **Urgent tasks**: - Wesleyan Eversource bills: Compiling total dues into spreadsheets for mock bill creation. - JCO credential setup: Concurrently managed alongside other high-priority items. ### Credential Management Process Systemic credential delays are impacting timelines, driving process refinements. - **Onboarding requirement**: Credentials must be provided upfront in onboarding documents to prevent bottlenecks. - **Documentation enforcement**: LOAs (Letters of Authorization) must be attached to Smartsheets for centralized access during utility outreach. - **Client-specific challenges**: - **LA College**: 711 accounts transitioning from mail to portal; client handling initial setup where possible. - **Munson**: Profile fragmentation (20+ logins per vendor) complicates credential management. ### Upcoming Client Implementations MedXL onboarding demands heightened coordination due to scale and complexity. 
- **Training preparation**: Session scheduled to align teams on requirements. - **Progress tracking**: - Non-EDI bills processed in DS but require UBM mapping. - EDI bill extraction initiated for accounts with accessible PDFs. - **Cross-functional sync**: Weekly MedXL meetings mandated to streamline credential linking, mapping, and live-date readiness.
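A sketch of the kind of "total charges not matching" check that routes bills into integrity review; the tolerance and field names are assumptions (the sub-dollar threshold mirrors how minor subtotal discrepancies are treated elsewhere in these notes):

```python
"""Sketch of a total-charges validation (threshold and shapes assumed)."""
from decimal import Decimal

TOLERANCE = Decimal("1.00")  # gaps under a dollar treated as noise

def total_charges_match(line_items: list, stated_total: Decimal) -> bool:
    return abs(sum(line_items, Decimal("0")) - stated_total) <= TOLERANCE

print(total_charges_match([Decimal("10.00"), Decimal("5.25")], Decimal("15.30")))  # True: within tolerance
print(total_charges_match([Decimal("10.00")], Decimal("50.00")))                   # False: route to integrity check
```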
Laptop Setup
## Summary ### Meeting Context and Opening Remarks The meeting began with an attempt to establish context, though the opening remarks were incomplete and cut off mid-sentence. The speaker initiated a greeting ("Hey, Sheikh") and started framing background information ("Just for some context, maybe that's helpful for"), but no substantive details were provided before the transcript ended. - **Incomplete contextual setup**: The speaker appeared poised to share foundational information relevant to the discussion, but the thought remained unfinished, leaving the purpose and scope of the meeting unclear. - **Lack of resolved topics**: Due to the abrupt ending of the transcript, no actionable subjects, decisions, or key themes were explored or documented. *Note: The transcript provided contains only a fragment of the meeting’s introduction, with no further dialogue or content available for analysis. As a result, this summary cannot detail specific agenda items, outcomes, or discussions.*
[EXTERNAL]Updated invitation: Ramp / Constellation @ Tue Sep 16, 2025 1:30pm - 2pm (EDT) (faisal.alahmadi@constellation.com)
### Prospect The prospect is developing a utility bill management platform focused on streamlining payment processing for business customers. Their solution requires handling payments for multiple vendors across multiple customers (many-to-many relationships), with an emphasis on utility bill payments. Key technical requirements include API-driven integrations for sending payment instructions and vendor details, as well as support for diverse payment methods to accommodate vendor preferences. The team is evaluating Ramp's capabilities to determine compatibility with their platform's payment execution workflow. ### Company The company operates in the utility bill management sector, providing a centralized platform for businesses to manage and pay utility invoices. Their service involves collecting bills (e.g., via PDF/email), extracting data, coding transactions, and executing payments through optimal methods. Current payment processing prioritizes credit cards, followed by ACH, eCheck, and physical checks. They seek a partner to handle the payment execution component, particularly for utility vendors with specific payment requirements (e.g., eCheck-only acceptance). The platform serves multiple business customers simultaneously, necessitating scalable payment orchestration. ### Priorities - **API Integration**: Ability to programmatically send payment instructions, vendor details, and invoice data to Ramp for execution, bypassing manual uploads. - **Payment Method Flexibility**: Support for credit cards, ACH, eCheck, and physical checks to accommodate utility vendors' constraints. - **Many-to-Many Relationships**: Managing payments for numerous vendors across multiple customer accounts within a unified system. - **Utility-Specific Workflows**: Solutions for handling utility bills (e.g., eCheck requirements) and reconciling payments. - **Partner Economics**: Understanding revenue-sharing models based on processed spend volume and commercial terms for API-based partnerships. - **Technical Validation**: Access to sandbox environments and API documentation to test integration feasibility.
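Illustrative only: the rough shape of an API-driven payment instruction the platform would send to a payment-execution partner. The endpoint and every field name below are hypothetical and do not represent Ramp's actual API; they simply mirror the requirements listed above (vendor details, invoice data, method preferences).

```python
"""Hypothetical payload sketch; NOT Ramp's real API."""
import requests

instruction = {
    "customer_id": "cust-001",  # which business customer the payment belongs to
    "vendor": {"name": "Example Utility Co", "remit_to": "PO Box 123"},
    "invoice": {"number": "INV-2025-0042", "amount_cents": 48250, "currency": "USD"},
    "method_preference": ["card", "ach", "echeck", "check"],  # platform's priority order
}

resp = requests.post(
    "https://api.payments-partner.example.com/v1/payments",  # placeholder endpoint
    json=instruction,
    headers={"Authorization": "Bearer <api-key>"},
)
resp.raise_for_status()
print(resp.json())
```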
DAG/Navigator Discussion
## Overview of DAG Architecture and Data Flow The Data Access Gateway (DAG) employs a medallion architecture with three layers: bronze (raw data), silver (refined data for CP), and gold (reporting/analytics). Approximately 80-90% of data for the Customer Experience Platform (CP), which handles customer power/gas accounts and billing, flows through DAG's silver layer. Key data sources include Genesis (power/gas), Pinnacle (commercial gas), Salesforce, Constellation Home, and other specialized systems. - **Data hierarchy**: Bronze ingests raw data from diverse sources, silver refines it for CP consumption, and gold supports analytics with partial coverage of critical tables like invoices. - **CP dependency**: CP relies heavily on DAG-processed data for operational functions, though some data is sourced directly from origin systems. ## Invoice Data Provisioning and PDF Bill Requirements DAG currently provides CP with structured invoice data (e.g., invoice numbers, dates, volumes, costs) across all layers. A critical gap exists in accessing actual PDF bills archived in Laserfish, which are essential for customer-facing displays and third-party sustainability audits. - **Structured data coverage**: Invoice tables in bronze/silver layers include fields like customer numbers and payment details but lack direct PDF access. - **Laserfish integration**: PDFs reside in an Azure storage container via Laserfish, with document IDs available in DAG's bronze/silver invoice tables. Retrieving PDFs requires calling Laserfish's API using these IDs. ## Approach for PDF and Data Integration Two parallel paths were proposed: leveraging existing DAG data layers for structured fields and integrating Laserfish links for PDF retrieval. A proof-of-concept (POC) will validate feasibility. - **POC scope**: Test with 3-5 customer accounts to map DAG's bronze/silver fields to CP's XML schema and confirm Laserfish document ID retrieval. - **Data mapping strategy**: Compare DAG's flat structures with CP's nested XML format to ensure compatibility, avoiding redundant normalization efforts. - **Laserfish access**: If document IDs are insufficient, explore direct database access via SMEs (Luis for IT, Chris Hayward for business logic). ## Data Access Constraints and Compute Model DAG exposes data through its layers but lacks APIs for on-demand queries. Downstream teams must use their own compute resources (e.g., Databricks) to access bronze/silver tables. - **Access model**: External applications query DAG's tables directly without data replication, using external tables or compute clusters. - **Limitation**: Heavy processing (e.g., billion-row analytics) requires custom pipelines, as DAG doesn’t support dynamic API-based extraction. ## Next Steps for Implementation Immediate actions include finalizing Databricks token reissuance for layer access and coordinating customer-specific data samples. - **Token resolution**: Address subscription-tier issues blocking Databricks connectivity to DAG layers. - **Customer samples**: Share specific account/commodity/month combinations with the DAG team to test field coverage and Laserfish ID retrieval. - **Laserfish exploration**: Initiate discussions with Laserfish SMEs if API/document access proves inadequate during POC.
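A sketch of the POC retrieval path: read Laserfish document IDs out of DAG's silver invoice tables (e.g., via Databricks), then fetch each PDF through the archive's API. The endpoint, auth scheme, and column names are assumptions the POC would validate:

```python
"""Sketch of PDF retrieval by document ID (endpoint and columns assumed)."""
import requests

def fetch_pdf(document_id: str, out_path: str) -> None:
    resp = requests.get(
        f"https://laserfish.example.com/api/documents/{document_id}/content",  # placeholder API
        headers={"Authorization": "Bearer <token>"},
        timeout=30,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

# Document IDs would come from DAG, e.g.:
#   SELECT invoice_number, laserfish_document_id
#   FROM silver.invoices WHERE customer_number = :acct  -- assumed table/columns
for doc_id in ["DOC-001", "DOC-002"]:
    fetch_pdf(doc_id, f"{doc_id}.pdf")
```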
PowerBI limit issues
## Migration from Power BI Premium to Fabric Capacity The meeting centered on transitioning critical Power BI reports from shared Power BI Premium capacity to a dedicated Microsoft Fabric capacity. This shift is necessary because Microsoft is phasing out Power BI Premium in favor of Fabric, with timelines ranging from May 2025 to 2027. Fabric offers advantages like granular scalability (capacity can be resized instantly during emergencies) and cost efficiency through smaller-scale purchasing. The urgency stems from recent performance issues caused by uncontrolled resource consumption in the shared "wild west" Premium environment, where one team’s actions could disrupt externally facing reports. ### Migration Challenges and Strategy A dedicated Fabric capacity will isolate critical reports to prevent cross-team disruptions. However, migration is complex: - **Large semantic model limitation**: Existing workspaces use "large model storage," which blocks direct migration. A new workspace must be created in Fabric, requiring manual or CLI-assisted transfer of semantic models and reports. - **Migration approach**: The Fabric Command Line Interface (CLI) could automate copying objects between workspaces, though its reliability for full migrations is untested. A small-scale trial is recommended before full migration. - **Timeline flexibility**: Immediate risk is mitigated by redistributing workloads to underutilized IT capacity, but migration should proceed methodically to avoid future disruptions. ### Technical and Operational Considerations Key factors for the Fabric transition include: - **Deployment pipelines**: Compatibility with existing multi-environment workflows (Dev, QA, Pre-Prod, Prod) is unconfirmed. Testing is essential to ensure Fabric supports pipeline-driven promotions between workspaces. - **Embedded reporting**: Power BI Embedded functionality must be validated in Fabric, as external-facing reports are business-critical. If issues arise, escalation to Microsoft will be necessary. - **Tenant alignment**: All resources reside under the single Constellation tenant. Future Azure resource group changes (e.g., for Constellation Navigator) won’t impact Fabric if workspaces use "small model storage," enabling easy capacity transfers. ### Capacity Setup and Management Two Fabric deployment models were evaluated: - **Managed capacity (recommended)**: Hosted in a shared IT resource group, this offers built-in support from internal teams for troubleshooting and Microsoft ticket escalation, reducing operational overhead. - **Self-managed capacity**: Requires dedicated Azure expertise to handle infrastructure issues independently, which is impractical for a single capacity without specialized skills. Admins will control workspace assignments, monitor usage metrics, and scale capacity proactively. "Small model storage" is advised for future flexibility, as switching to large models is possible, but the reverse is not. ### Next Steps and Governance A Fabric capacity will be provisioned via a ServiceNow ticket, specifying admins (e.g., Jay or Sergio) to manage workspaces and permissions. Post-creation, a walkthrough will cover: - Admin controls for adding workspaces/users. - Monitoring tools to track capacity utilization and preempt bottlenecks. - Cost visibility, though initial billing may roll into a shared budget. This isolation will simplify root-cause analysis during incidents, contrasting with the current opaque shared environment.
DSS/UBM sync
## Summary The discussion centered on introductions and strategic direction for the UBM and DSS platforms. Faisal, newly joining Constellation, outlined his background in product management and immediate focus on stabilizing DSS over the next 3-4 months while addressing technical debt. Key challenges include scalability limitations in UBM, fragmented communication between UBM and data ingestion teams, and over-reliance on custom solutions for client payments. The long-term vision involves modularizing UBM to reduce hard-coded customizations, improving cross-team collaboration, and enabling self-serve features to minimize engineering overhead. ## Wins - **Payment Processing Templates**: Initial progress on standardizing payment files by creating reusable templates for common processors (e.g., Appfolio), reducing redundant development for similar client requests. - **Technical Leadership**: Successful delegation of technical tasks within the UBM engineering team to balance workloads and enhance efficiency. ## Issues - **Scalability Risks**: UBM’s architecture cannot handle projected customer growth due to technical debt, non-scalable processes, and manual interventions. - **Siloed Operations**: Critical gaps in communication and understanding between UBM and data ingestion (DSS) teams, leading to reactive problem-solving. - **Customization Overload**: Excessive time spent building unique payment files per client instead of scalable, configurable solutions. - **Documentation Gaps**: Lack of standardized processes and clear documentation hinders cross-team alignment and proactive improvements. ## Commitments - **Modular UBM Architecture**: Transitioning toward a templated, self-serve model for client configurations (e.g., payments) to reduce custom code. Owner: Engineering/Product Teams. - **Cross-Team Collaboration Framework**: Establishing structured communication channels between UBM and DSS to align on requirements and scalability efforts. Owner: Product Leadership. - **Technical Debt Reduction**: Prioritizing foundational fixes in DSS over the next 3-4 months to enable future platform enhancements. Owner: Product/Engineering.
DSS-UBM Alignment
## DSS Priorities and Challenges The meeting focused on critical priorities for the Data Services System (DSS), particularly the urgency to remove the toggle bypassing DSS in favor of the legacy system. While this change is deemed necessary, several complexities emerged: - **Legacy system exceptions**: Specific use cases (e.g., summary bills exceeding six pages, non-USD currency invoices) require continued routing through the legacy system (DDAS), but the implementation timeline remains uncertain due to unresolved technical dependencies. - **Conflicting priorities**: Multiple stakeholders (e.g., Abhinav, Terra) have divergent views on solution ownership, leading to competing backlogs and operational friction. - **Stability concerns**: Temporary fixes are creating a "house of cards" scenario, with unaddressed underlying issues risking future system failures. ## Backlog and Operational Issues Operational inefficiencies across systems are causing significant customer-impacting delays, with backlogs stemming from siloed workflows and validation errors: - **Validation bottlenecks**: Over 50 customers face payment delays due to easily resolvable validation errors (e.g., account numbers with dashes triggering rejections in UBM), requiring manual admin intervention instead of automated resolutions. - **Siloed tracking**: Disconnected systems (DSS, UBM, data services) prevent end-to-end visibility. For example: - CSMs lack consolidated dashboards to track customer bill statuses across audit stages, delaying issue resolution. - UBM rejections due to formatting errors (e.g., missing fields) force reprocessing loops instead of real-time error feedback. - **Resource strain**: Data operators are overwhelmed by manual downloads and QA tasks, reducing capacity for strategic improvements. ## Systemic Process and Communication Gaps Cross-functional misalignment is exacerbating operational challenges, with critical gaps in process ownership and issue resolution: - **"No man’s land" issues**: Problems falling between teams (e.g., validation logic, batch upload failures) remain unaddressed, causing downstream backlogs and customer dissatisfaction. - **Inadequate prioritization**: Teams focus on clearing individual backlogs rather than systemic solutions, perpetuating firefighting cycles. For instance: - UBM’s practice of rejecting entire batches for single errors (vs. itemized error reports) increases rework. - Siloed trackers (e.g., Power BI, HubSpot) lack integration, forcing manual cross-referencing to diagnose customer issues. - **Communication breakdown**: Absence of a unified issue-tracking system leads to blame-shifting, with teams like DSS and UBM operating in isolation. ## Technical System Issues Specific technical constraints across platforms are hindering automation and scalability: - **DSS limitations**: - Non-USD invoices and summary bills (>6 pages) fail processing, requiring legacy system fallbacks. - Performance degradation in DDAS interfaces due to unoptimized database queries, necessitating indexing or server restarts. - **UBM batch processing**: - Ingesting Constellation bill data requires mimicking legacy formats, delaying automated pathways. - Lack of fuzzy logic for validation (e.g., auto-correcting account number formats) forces manual fixes. - **Error handling gaps**: Systems halt at the first error in batch processing rather than compiling full reports, increasing resolution time.
## Task Management and Tracking Improvements Efforts to streamline task oversight and accountability were discussed, though execution barriers persist: - **Tracker consolidation**: A unified process map (e.g., via Whimsical) is proposed to align all system workflows (PDF pathways, bill audits) with corresponding trackers, replacing siloed reports. - **Modernization hurdles**: - Migration tasks (e.g., .NET 7 to 8 upgrades) are deprioritized amid urgent operational issues. - Azure board updates are inconsistent due to permission constraints and unclear ownership for moving tickets to "Done." - **Dashboard development**: An integrated customer-view dashboard (pulling Power BI and HubSpot data) is in progress to provide real-time visibility into billing/payment statuses. ## Strategic Direction and Next Steps A long-term shift from reactive fixes to foundational rebuilding was emphasized, acknowledging the need for parallel firefighting and structural work: - **Process re-engineering**: Focus on automating validation, batch error resolution, and end-to-end ingestion pathways to eliminate manual touchpoints within six months. - **Cross-functional alignment**: Weekly syncs proposed to audit process flows, enforce tracker standardization, and break down silos between DSS, UBM, and data teams. - **Customer-centric prioritization**: Initial focus on DSS ingestion, followed by UBM optimization and bill-pay systems, reflecting the invoice processing hierarchy.
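Two of the fixes above, fuzzy account-number normalization and itemized (rather than fail-fast) batch error reports, are straightforward to sketch; the record shape and rules are assumptions:

```python
"""Sketch: normalize account numbers, then report every batch error at once."""

def normalize_account(raw: str) -> str:
    # The kind of cleanup whose absence currently triggers UBM rejections
    return raw.replace("-", "").replace(" ", "").upper()

def validate_batch(records: list) -> list:
    errors = []  # compile everything instead of halting at the first problem
    for i, rec in enumerate(records):
        if not rec.get("account_number"):
            errors.append(f"row {i}: missing account_number")
        if rec.get("amount", 0) < 0:
            errors.append(f"row {i}: negative amount")
    return errors  # empty list means the batch is clean

batch = [{"account_number": "12-34 56", "amount": 10.0}, {"amount": -5.0}]
for rec in batch:
    if rec.get("account_number"):
        rec["account_number"] = normalize_account(rec["account_number"])
print(validate_batch(batch))  # ['row 1: missing account_number', 'row 1: negative amount']
```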
Mircea <> Faisal Intro
## Summary ### Introductions and Backgrounds The meeting began with mutual introductions where both participants shared their professional backgrounds and personal interests. Faisal, newly joining the team, outlined his role in taking over UBM and DSS responsibilities, emphasizing his product management experience at Finmark (a Y Combinator startup) and Bill.com. His focus includes stabilizing data ingestion processes and scaling systems to handle increased customer volume. Mircea, a DevOps engineer with seven years of experience, primarily supports Data Services (DS) and highlighted technical challenges in the current infrastructure. Personal interests like outdoor activities, sports, and AI's role in development were briefly discussed. ### Current Challenges and Roadmap Critical pain points and strategic direction for Data Services were addressed: - **System Scalability Issues**: Rapid customer growth has exposed infrastructure limitations, making the system unable to handle doubled volume without significant re-engineering. - **OpenAI Processing Delays**: Batch processing via OpenAI faces severe latency (up to 30 minutes per request), linked to recent model updates. A support ticket is open, but monitoring is hampered by fragmented access. - **12-Month Vision**: Plans include stabilizing ingestion, refining LLM logic, and eventually opening the DS API to external users. Immediate focus is on resolving technical debt before feature expansion. ### Infrastructure and Access Management Infrastructure complexities and operational bottlenecks were detailed: - **Multi-Environment Fragmentation**: - OpenAI resources reside in *Constellation’s Azure* due to billing constraints, while other components remain in *Final Data Group*. - A stalled migration to consolidate environments was deprioritized for feature development. - **Access Risks**: - Single points of failure exist (e.g., only Mircea has full OpenAI access). Faisal will request backup access via Chris Busby’s established process. - Cognizant device limitations hinder real-time issue tracking, requiring workarounds like dedicated browser profiles for separate tenant logins. ### Technical Debt and Stability Efforts Legacy systems and deployment gaps threaten reliability: - **Pipeline Deficiencies**: Most components lacked CI/CD pipelines until recently. Mircea is urgently documenting deployment processes for the legacy desktop operator tool before his leave. - **Monitoring Shortfalls**: Azure-based dashboards track OpenAI latency/tokens but lack granular insights. Correlation shows high latency during large requests (e.g., 34K tokens). - **Team Coordination**: Ad-hoc task management via DMs replaced structured tracking, delaying migration and debt reduction. ### Immediate Actions and Availability Next steps and scheduling constraints were clarified: - **Holiday Coverage**: Mircea will be out October 2-12, plus preceding Friday/Monday. Deployment documentation will be shared for emergency use. - **Follow-Up Planning**: Faisal will schedule deep-dive sessions to address scalability, technical debt, and migration resumption post-stabilization. - **Cross-Team Backup**: Matthew (Final Data Group) retains system access, providing interim support during absences.
DSS Plan
## Summary ### Invoice Processing Backlog and Microsoft Azure Issues A significant backlog of approximately 3,000 invoices has accumulated due to delays in Microsoft Azure's OpenAI processing queue. Normally, invoices process within 10 minutes, but since mid-last week, responses have been severely delayed, sometimes exceeding even Microsoft's contractual 24-hour processing window. Key details include: - **Throttling implemented**: Ingestion was slowed to 2-3 invoices every 10 minutes to avoid overwhelming Microsoft's systems, though web uploads bypass queues and go directly to Azure. - **Engineers engaged**: A support ticket was escalated to Microsoft engineers after initial troubleshooting failed, indicating the severity of the issue. - **Failures compounding delays**: Some invoices in the queue are failing outright due to Azure-side rejections, further complicating resolution. ### System Integration and Status Synchronization The disconnect between DSS (Data Services System) and the legacy billing system creates operational friction, especially when invoices are manually processed outside DSS. Critical gaps identified: - **No real-time status updates**: DSS doesn’t reflect whether bills are processed, in queue, or failed, forcing manual checks via Batch Management or Common Bill Format reports. - **Risk of duplication**: If backlogged invoices eventually process through Azure, they might duplicate efforts or create conflicting records if already manually handled. - **No bulk tools**: Manually hiding processed invoices in DSS is impractical at scale (e.g., thousands of bills), highlighting the need for automated syncing. ### Processing Exclusions and Edge Cases Certain invoices must bypass automated LLM (Large Language Model) processing due to unique requirements, but current systems lack granular controls. Priority exclusions needed: - **Client-specific instructions**: Bills requiring custom notes (e.g., "process as live for payment") are skipped by the LLM, causing audit gaps and missing critical directives. - **Vendor/account exceptions**: Examples include SoCalA’s service-agreement IDs being misclassified as new sub-accounts, and delivery vendors needing manual cross-referencing of prior bills. - **High-impact scenarios**: Natural gas supply meters incorrectly tagged with "generation use" by the LLM cause 90-95% of observed output errors. ### Technical Defects and Output Errors Specific recurring technical issues disrupt billing accuracy and require targeted fixes: - **Double-counting distribution costs**: Observed in UGI bills where distribution charges appear as both "use cost" and "adjustment use cost" lines due to caption-matching failures in the LLM. - **Empty line items in CSVs**: Blocks with zero usage/charges generate blank lines in output files; a fix purges these blocks pre-export (a sketch follows these notes). - **Natural gas meter tagging**: The LLM misapplies "generation use" to supply-only meters, distorting cost allocation (separate from distribution double-counting). ### Prioritization and Next Steps Three high-priority fixes were identified to address operational bottlenecks: 1. **Synchronize DSS with legacy systems**: Essential to prevent duplication and reflect real-time bill statuses without manual intervention. 2. **Implement LLM exclusions**: Enable bypass rules for vendors/clients needing manual processing or custom instructions. 3. **Resolve natural gas tagging**: Correct LLM logic misapplying "generation use" to supply-only meters, a top source of output errors.
- **Web download report pending**: A real-time vendor/operator report is under review after initial feedback indicated usability issues.
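As referenced above, a minimal sketch of the pre-export purge for zero-value blocks. The field names (`usage`, `charges`) and the block structure are assumptions, not the actual DSS schema:

```python
def purge_empty_blocks(blocks: list[dict]) -> list[dict]:
    """Drop bill blocks with zero usage and zero charges so they never reach
    the CSV export, where they would otherwise appear as blank lines."""
    def is_empty(block: dict) -> bool:
        return (float(block.get("usage") or 0) == 0
                and float(block.get("charges") or 0) == 0)
    return [b for b in blocks if not is_empty(b)]

# Example: the zero-value block is removed before export.
blocks = [
    {"meter": "A1", "usage": 120.5, "charges": 88.10},
    {"meter": "A2", "usage": 0, "charges": 0},  # would become a blank CSV line
]
assert purge_empty_blocks(blocks) == blocks[:1]
```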
DSS/UBM sync
## Customer The customer is Constellation, an organization utilizing UBM and DSS platforms for utility bill management. The product owner recently joined three weeks ago and is establishing processes to capture user feedback for product improvement. The operations team is actively managing bill processing, onboarding new clients, and addressing platform-related challenges. ## Success A significant achievement highlighted was the successful onboarding of Dakota (URA) onto the platform. While initial communication about its completion status was unclear, confirmation was provided that all necessary setup in DS and EM systems was finalized, enabling the customer to utilize the platform effectively for their needs. ## Challenge The primary challenge involves managing **duplicate payments and processing errors**, particularly concerning mock bills. These mock bills trigger account freezes (indicated by a snowflake icon), requiring manual intervention to apply cancellation bill blocks and prevent subsequent invoices from being paid twice. Additionally, the team faces: - **Platform Data Issues:** Instances of duplicate bills appearing on the UBM platform and incorrect observation types for line items, suggesting potential breaks between systems. - **Onboarding Delays & Credential Gaps:** Incomplete or delayed credential provisioning during onboarding, causing setup delays and operational bottlenecks. Ensuring all required credentials are provided upfront when onboarding sheets are submitted is a critical focus. - **Backlog Management:** A significant backlog of older bills (pre-September) requires dedicated effort to process, necessitating improved account assignment strategies based on due dates. ## Goals Key customer goals identified during the meeting include: - **Immediate Go-Lives:** Successfully launching Aspen and Cedar Street on the upcoming Friday. - **Onboarding Completion:** Finalizing the onboarding process for Simon, requiring the receipt and processing of account data for all regions (beyond the initially provided California and Texas data). - **Targeted Launches:** Ensuring Quadrm goes live as planned on November 26th (focusing on AP file integration, non-bill pay). - **Resolution for Cobia:** Determining a clear path forward for Cobia's AP file requirements (standard vs. custom CSV with PDF renaming vs. API) and establishing a realistic go-live timeline based on credential setup and utility count. - **Transition Planning (Big Y):** Providing Big Y with guidance on the optimal timing for overlapping service with their current provider (NG) during the Constellation transition, with a general recommendation of a one-month overlap period while cutting over payment processing completely. - **Process Improvement:** Implementing a new ticketing system to replace email-based issue reporting and enhancing the onboarding sheet review process to ensure completeness before setup begins.
OpenAI Monitoring
## Summary ### Critical OpenAI API Performance Issues Significant delays in OpenAI API responses have created a processing backlog of approximately 3,000 invoices, severely disrupting operations since September 9th/10th. The system is currently processing only minimal volumes (e.g., 2 invoices per half-hour), rendering it ineffective for current demands. This marks the second major API performance incident within a month, indicating an underlying systemic issue requiring deeper investigation by Microsoft/OpenAI support teams. Key observations include: - **Backlog severity**: 3,000+ invoices are stalled due to unresponsive API endpoints, with no significant processing occurring over the weekend. - **Support engagement**: A ticket has been escalated to Microsoft/OpenAI support, copying relevant internal stakeholders, though resolution timelines are uncertain. - **Monitoring gap**: No proactive alerting exists for API latency or backlog thresholds, delaying issue detection (a simple threshold-alert sketch follows this summary). ### Infrastructure Access and Redundancy The current setup creates a critical single point of failure, as only one individual (Mircea) has administrative access to the Constellation environment hosting the OpenAI integration. Mitigation plans include: - **Access mirroring**: Urgent need to provision identical administrative access for Faisal via SailPoint/Z-key authentication to eliminate reliance on a single person, especially given Mircea's unique access to the environment configuration. - **Environmental context**: Confirmed that the issue is isolated to OpenAI API responsiveness and unrelated to the ongoing Azure migration. ### Project Management Tool Migration A decision has been made to transition from Azure DevOps to Jira for improved scalability and workflow management. This requires: - **Backlog audit**: A thorough review of the existing Azure DevOps board (containing ~30 "Ready" items) is needed to identify valid tickets for migration, prioritizing DSS-related work while deprioritizing legacy system enhancements. - **Process change**: Afton will be granted access to the new Jira board (or channel requests through Faisal) to centralize task tracking, requiring team-wide adoption of the new workflow. - **Execution plan**: Scheduling a session this week with Shake to clean up and categorize the backlog before migration, with items needing Gaurav's input set aside until his return. ### User Activity Reporting for Afton A user activity report was delivered to Afton, but data limitations prevent fulfilling all requested details: - **Report scope**: Includes all active Constellation users and tracks "first bill processed" and "last bill processed" as proxies for user engagement, as login/logout timestamps are not recorded. - **Unavailable metrics**: Cannot differentiate between invoice downloads initiated via FDG Connect web interface versus email/file exchange due to lack of source tracking in the database. Total download metrics encompass all methods. - **Temporary solution**: The provided report is considered an interim measure; long-term logging enhancements would be needed for granular data but are currently deemed low priority. ### Strategic Direction and Backlog Prioritization Long-term planning for DSS (Data Services System) will be formalized in an upcoming meeting with the full team, focusing on a 6-12 month roadmap.
Immediate backlog prioritization emphasizes: - **Legacy system sunsetting**: Explicit deprioritization of tickets related to enhancing legacy systems (often tagged to Matthew), aligning with the strategic shift towards the BSS2 UBM workflow. - **DSS focus**: Valid DSS-related tickets from the existing backlog will be migrated to Jira to maintain continuity, while irrelevant items will be archived.
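As referenced in the monitoring-gap note above, a minimal threshold-alert sketch. The webhook URL, the threshold, and the assumption that a pending-invoice count can be queried are all hypothetical:

```python
import json
import urllib.request

BACKLOG_THRESHOLD = 500  # assumed; tune to normal daily volume
WEBHOOK_URL = "https://hooks.slack.com/services/XXX"  # hypothetical alert channel

def check_backlog(pending_invoices: int) -> None:
    """Post an alert once the stalled-invoice count crosses the threshold,
    instead of waiting for someone to notice the queue by hand."""
    if pending_invoices < BACKLOG_THRESHOLD:
        return
    body = json.dumps(
        {"text": f"OpenAI queue backlog at {pending_invoices} invoices"}
    ).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```

Run on a schedule (cron or an Azure Function timer), this closes the detection gap without touching the processing pipeline itself.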
Jay <> Faisal
## Customer The customer utilizes a utility bill management (UBM) platform to capture, validate, and analyze utility bills across multiple locations. Their operations involve managing complex hierarchies of locations (e.g., buildings, floors, departments) and require extensive customization through attributes to support industry-specific reporting and analytics. Key operational needs include bulk onboarding of location data and attributes, robust bill validation, and scalable data ingestion processes to handle large volumes efficiently. ## Success The platform's **custom attribute functionality** stands out as a significant success. This feature allows deep customization at the location level, enabling customers to define and capture industry-specific data points (e.g., building types for real estate, department codes for healthcare). This flexibility supports advanced grouping, reporting, and analytics, providing a critical competitive edge over solutions lacking such granular customization. The ability to bulk-load attribute values for thousands of locations via templates further enhances operational efficiency during onboarding (a template sketch follows this summary). ## Challenge The most critical challenge is **scaling data ingestion and validation processes**. Persistent issues with bill ingestion have led to unresolved validation errors, forcing temporary measures like auto-closing bills without customer notification. This undermines the platform’s core value proposition of proactive validation and alerting. Additionally, the lack of a structured system to capture and act on customer feedback (e.g., from surveys like Delighted) exacerbates dissatisfaction, as vocal customers report frustrations, particularly regarding data handling, without a clear resolution path. ## Goals Key customer goals identified include: - **Implementing structured feedback capture**: Formalizing processes to gather, analyze, and act on product feedback from surveys and direct customer interactions. - **Resolving data ingestion scalability**: Addressing technical bottlenecks to ensure reliable bill processing and validation at scale. - **Establishing backlog management**: Creating a centralized system (e.g., Jira EPIC) to track and prioritize validation issues and technical debt for timely resolution. - **Revisiting disabled validations**: Ensuring temporary measures (e.g., auto-closing bills) are transitioned to permanent fixes that restore core validation functionalities.
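A sketch of what the bulk-load template mentioned above might look like in practice. The column names and layout are illustrative assumptions, not the platform's actual template format:

```python
import csv

# Hypothetical layout: one row per location, one column per custom attribute.
ATTRIBUTES = ["building_type", "department_code", "square_footage"]

def write_template(location_ids: list[str], path: str = "attribute_template.csv") -> None:
    """Emit a blank template so attribute values for thousands of locations
    can be filled in offline and uploaded in a single pass."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["location_id", *ATTRIBUTES])
        for loc in location_ids:
            writer.writerow([loc] + [""] * len(ATTRIBUTES))

write_template(["LOC-001", "LOC-002"])
```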
Faisal/Sunny Sync
## Ulta Gas Design Diagram Request A request for updated Constellation design diagrams requires clarification before proceeding. - **Need for context**: The broad nature of the request necessitates understanding whether it relates to payments, UBM API integrations, or other pain points. - **Action taken**: Sales team (Alex/Des) was contacted for background details, but no response received yet. Alison Diller (commodity sales) and Tim are potential contacts if follow-up is needed by Tuesday. ## Okta SSO Implementation Challenges Bureaucratic hurdles are delaying Okta SSO deployment, critical for enterprise customer access. - **Procurement complexities**: Constellation's internal procurement team requires precise documentation (PO numbers, contracts, emails), creating delays despite customer demand for SSO. - **Relationship management**: Diana is identified as the responsible party; proactive outreach is recommended to emphasize urgency without straining relations, given her team's workload. ## Pay Clearly Report Handling A recurring CSV report from Pay Clearly is currently managed manually but represents a fragmented data process. - **Operational context**: The report provides customer updates from a key vendor, requiring manual consolidation with other data sources for customer support. - **Future integration opportunity**: This CSV could be automated via FTP into UBM’s admin view, eliminating manual tracking and aligning with broader platform unification goals. No immediate action is required, but it’s noted for long-term workflow optimization (an FTP automation sketch follows this summary). ## Internal Process Observations Fragmented communication and documentation practices are creating operational inefficiencies. - **Email coordination gaps**: Follow-ups with sales/internal teams (e.g., Alison, Tim) are needed to close loops on external requests like Ulta Gas’s diagram. - **Vendor management**: Proactive stakeholder engagement (e.g., with Diana for Okta) is essential to navigate bureaucratic delays while maintaining partnerships. ## Vendor Data Strategy Disparate vendor reports highlight the need for centralized data management. - **Current workflow pain point**: Manual handling of reports like Pay Clearly’s CSV creates redundancy and limits real-time visibility. - **Platform solution**: A unified dashboard or automated data ingestion via UBM could consolidate vendor inputs, transforming ad-hoc updates into actionable insights.
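A minimal sketch of the FTP automation floated for the Pay Clearly report. The host, filename, and credential handling are hypothetical:

```python
import ftplib
import io

FTP_HOST = "ftp.example.com"            # hypothetical vendor FTP host
REMOTE_FILE = "payclearly_updates.csv"  # hypothetical report name

def fetch_vendor_report(user: str, password: str) -> str:
    """Pull the recurring vendor CSV so it can be ingested automatically
    instead of being consolidated by hand."""
    buf = io.BytesIO()
    with ftplib.FTP(FTP_HOST) as ftp:
        ftp.login(user, password)
        ftp.retrbinary(f"RETR {REMOTE_FILE}", buf.write)
    return buf.getvalue().decode("utf-8")
```

The returned text could then feed the same ingestion path as other UBM admin-view data sources.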
Review Gaurav Meeting
## Summary The discussion focused on team alignment, project priorities, and resource planning for the DSS (Data Services System) ecosystem. Key context includes: - The participant has been with Constellation since 2017 but is primarily dedicated to Arise Energy, supporting the DSS team temporarily to stabilize the system. The goal is to transition responsibilities to Shake. - Significant progress has been made: the system now processes approximately 62,000 invoices daily, a major improvement from handling only a few hundred previously. This was achieved by bypassing legacy bottlenecks and implementing rapid fixes. - The immediate technical priority is developing a new output service within the DSS ecosystem to replace the flaky legacy output service. This is critical because processed invoices must reliably reach UBM (Utility Bill Management) to have value for customers. - Resource constraints are a concern, with only one dedicated engineer (Shake) currently active, as another engineer (Matthew) is on extended paternity leave. This raises questions about capacity for upcoming DSS goals over the next 6-12 months. - Efforts are underway to improve processes: migrating from Azure Boards to JIRA for better collaboration and establishing regular, short daily check-ins (starting the week of the 22nd) for faster onboarding and alignment. Documentation accuracy and completeness, particularly regarding system architecture (IPS - Invoice Processing Service) and data flows, are also a focus. ## Wins - **Successfully stabilized the DSS system**, enabling a dramatic increase in processing capacity from hundreds to **over 62,000 invoices per day**. - **Implemented rapid development and deployment cycles** to address critical issues, effectively "turning the ship around" on system performance. - **Established a functional pathway** for AI-powered invoice processing via IPS, feeding data back into DSS for downstream handling. - **Created mechanisms** for the operations team to reprocess failed invoices directly within DSS in many cases, reducing reliance on legacy system interventions for transient issues. ## Issues - **Critical Resource Shortage:** The team currently operates with effectively only one active engineer (Shake), insufficient for both ongoing maintenance and planned future enhancements for DSS. Another engineer (Matthew) is unavailable on extended leave. - **Fragile Legacy Integration:** The existing legacy output service is unreliable ("flaky") and a black box (no source code, limited control beyond restarting). This creates a single point of failure for getting data to UBM. - **Documentation Gaps & Accuracy:** Existing documentation (e.g., architecture diagrams, data flows) may contain outdated elements or lack detail on certain processes (like specific failure recovery paths between DSS and the legacy system). Ensuring current and comprehensive docs is vital. - **Knowledge Transfer & Bus Factor:** Heavy reliance on specific individuals (like the participant) for deep system knowledge (especially IPS) poses a risk as the transition plan involves them reducing involvement. Proactive knowledge capture is needed. - **Technical Debt:** Past rapid fixes, while necessary for stabilization, may have introduced solutions that are not scalable or sustainable long-term ("duct tape"), requiring future rework. - **Communication Tool Fragmentation:** Team communication is split between Microsoft Teams and Slack, hindering cohesion.
## Commitments - **Shake:** Will complete work on the BG Web report (needed for monitoring contractor throughput) and then immediately focus on developing and implementing the **new DSS output service** to replace the legacy component, ensuring data flows reliably to UBM. (Owner: Shake) - **Shake & Participant:** Collaborate closely on the design, implementation, and backup mechanisms for the new DSS output service. (Owners: Shake & Participant) - **Participant:** Provide deep-dive knowledge transfer sessions on the **IPS (Invoice Processing Service)** architecture and functionality to ensure comprehensive understanding. (Owner: Participant) - **Team:** Utilize and validate existing architecture and data flow documentation (e.g., CDS File Flow, IPS docs in Confluence), identifying and filling gaps to create a single source of truth. (Owner: Team) - **Team:** Establish and adhere to **daily check-in meetings** starting the week of the 22nd (scheduled for 9:30 AM Eastern on the 22nd & 23rd, and 11:30 AM Eastern on the 19th initially) for alignment and rapid issue resolution. (Owner: Team) - **Team:** Migrate task tracking from Azure Boards to **JIRA** to improve workflow and cross-team collaboration. (Owner: Team)
UBM Planning
## Meeting Purpose and Context The meeting initiated Constellation Audit Services' (CAS) enablement engagement to support the Navigator team in expanding UBM and carbon accounting platforms for U.S. customers with international locations. This collaboration aims to establish robust processes and controls for the expansion while mitigating risks, leveraging CAS’s expertise in advisory (non-audit) support. ## Introductions and Team Backgrounds Participants introduced their roles and experience: - **CAS Team**: - *Manager*: 15 years in internal audit, focusing on assurance and advisory engagements. - *Director*: 9 years at CAS, overseeing operational audits and IT/cyber initiatives. - *Senior Analyst*: 10 years in operational audits, experienced in enablement engagements. - **Navigator Team**: - Leadership includes a senior manager of customer success/operations, product management leads, and analysts. - Members emphasized backgrounds in sales support, legal operations, and product commercialization, with tenure ranging from 3 weeks to 17 years at Constellation. ## CAS Role and Engagement Scope CAS clarified its function as an independent advisory partner: - **Enablement Focus**: - Providing proactive guidance on process design, controls, and risk mitigation, distinct from traditional audits. - Preserving independence by avoiding decision-making ownership; instead, offering insights based on cross-company best practices. - **Strategic Alignment**: - Supporting Navigator’s growth goals while embedding compliance and control frameworks early in the expansion effort. ## Engagement Methodology (DARE Framework) The process follows CAS’s structured **DARE** protocol: - **Define Phase (Current Stage)**: - Background research completed; kickoff meeting aligns stakeholders on objectives and next steps. - **Assess Phase (Next Steps)**: - Collaborative working sessions to document processes, controls, and guardrails for international expansion. - **Report Phase**: - Will summarize support provided and suggestions (no formal audit findings). - **Enable Phase**: - Post-engagement survey to refine CAS’s support model. ## Responsibilities and Independence Clear delineation of roles was emphasized: - **CAS Responsibilities**: - Facilitating discussions, advising on documentation (e.g., process maps, control points), and sharing best practices from SOX/ICFR experience. - **Navigator Responsibilities**: - Driving timelines, providing process expertise, and owning final decisions to maintain CAS independence. ## Documentation and Process Review Key aspects of the expansion framework were discussed: - **Existing Materials**: - Regulatory/legal memos, contracts, and SOC reports have been approved; the focus is now formalizing end-to-end documentation. - **Collaboration Plan**: - CAS will join working sessions with Navigator’s analyst to translate existing materials into structured documentation using established templates. - **Guardrails**: - Emphasis on aligning processes with pre-approved international expansion protocols to ensure compliance. ## Timeline and Next Steps Targets and logistics for the engagement: - **Deadline**: - Documentation and review targeted for **October**, pending Navigator’s availability during peak selling season. - **Immediate Actions**: - Navigator’s analyst to share a draft document; CAS to schedule working sessions for iterative feedback.
- **Communication**: - Primary contact established with Navigator’s analyst for coordination, with broader team involvement as needed. ## Reporting and Distribution Final deliverables and stakeholder alignment: - **Engagement Report**: - Will outline CAS’s support, suggestions, and scope (not findings). Draft to be shared for review before distribution. - **Distribution List**: - Includes Navigator leadership, CAS, Ethics & Compliance, and Risk partners. Specific additions requested: George Esseveda (Navigator) and Faisal (product management). - **Transparency**: - Open feedback encouraged throughout to ensure alignment and clarity.
Weekly DSS call
## Summary ### Process Improvement and Ticket Management Significant focus was placed on establishing clearer workflows and visibility into ongoing work. Key initiatives include migrating from Azure Boards to Jira within the next few weeks to centralize issue tracking, enhance automation, and improve team access. This transition aims to address the current lack of clarity on priorities and progress, enabling better decision-making when new requests (like those from Afton) arise. The migration will start manually to facilitate learning and process refinement before full automation. - **Jira Implementation:** Transitioning from Azure Boards to Jira will provide a unified system for capturing bugs, enhancements, and feature requests, improving prioritization and visibility across teams. - **Prioritization Framework:** Establishing a defined process will allow for dynamic reprioritization, such as pushing back lower-priority items when critical new requests emerge. - **Initial Manual Phase:** The migration will begin with manual oversight to ensure thorough understanding and refinement of the new workflow before full automation is implemented. ### Data Acquisition Queue and Invoice Processing The automated system for processing the invoice backlog is now operational and successfully cleared the accumulated queue over the previous weekend. Invoices are now processed promptly upon arrival. Efforts are underway to enforce consistent processing through the DSS system by removing manual overrides. - **Queue Resolution:** The automated ingestion mechanism is functioning effectively, eliminating the backlog and ensuring timely processing of new invoices as they arrive. - **Enforcing DSS Processing:** The toggle allowing operators to bypass DSS for web download invoices will be removed, mandating all non-exception invoices flow through the automated DSS pipeline. - **Handling Exceptions:** Only summary invoices and non-US dollar (e.g., Canadian) invoices will continue to use the legacy process; any exceptions attempting DSS processing will be flagged for manual handling. - **Performance:** System performance issues (e.g., slowness) have been resolved, with processing typically completing within 24 hours. ### Technical Issues and Solutions Specific technical challenges were addressed, including a fix for blank/zero-dollar invoice blocks causing UBM ingestion failures and plans to decommission a legacy output service. - **Blank/Zero-Dollar Invoice Fix:** A patch has been developed to remove empty distribution blocks (common in supply-only invoices) that caused UBM failures, preventing them from entering the ingestion pipeline as noise. Testing for this fix is pending. - **Legacy Output Service Bypass:** Development is underway on a solution to bypass the legacy service (responsible for bundling DSS output for UBM), which is prone to errors like the blank invoice issue. This new path will drop processed invoices directly into UBM's intake folder. While not yet operational, completion is expected within days. - **Legacy Queue Impact:** Invoices currently stuck in the "ready to send" status within the legacy system must still be processed manually, as the new bypass solution cannot rerun them. ### Reporting and Upcoming Initiatives Preparation is ongoing for a new team starting on September 17th, focusing on generating a critical report for tracking download activities. 
- **Download Activity Report:** A report detailing download counts per individual and per utility, including instances where users attempted downloads but encountered "snooze" (unavailability), is under development. This report is essential for both the new team and internal tracking, with completion targeted before the September 17th start date. ### Resource Allocation A note was made regarding competing priorities, with one team member experiencing increased demands from Rise Energy projects, potentially impacting bandwidth for DSS-related tasks.
DSS Check-in
## Summary The discussion focused primarily on current work priorities and upcoming tasks related to the FDG portal. An update was recently deployed to production, enabling the display of outstanding work items per vendor within the portal. A new request from Afton requires creating a client-centric view in the portal, allowing users to filter and see all tasks associated with a specific client. This feature is recognized as more complex and may take 1-2 days to implement. As an interim solution, a Power BI report will be developed to fulfill the immediate need for client-level task visibility. There was some uncertainty regarding the priority of another request from Afton concerning web downloads/uploads, mentioned in a previous email. Clarification on its urgency and requirements compared to the client view task is needed. The goal is to establish a clearer pipeline of work spanning 2-4 weeks to improve planning and ensure continuity. ## Wins - Successfully deployed an update to the FDG portal in production, adding functionality to list and count all outstanding work items per vendor. - The deployed vendor-specific task view is now live and operational for users. ## Issues - Scheduling a meeting with Godo is complicated due to managing two separate calendars without shared access, requiring coordination via messages. - Clarification is needed on the priority and specific requirements of Afton's separate request related to web downloads/uploads mentioned in an email, compared to the active client view portal task. - There is a desire to build a more robust work pipeline (2-4 weeks worth) to facilitate better planning and mitigate disruptions. ## Commitments - A Teams thread will be initiated with Afton (including relevant parties for visibility) to clarify the priority of her requests (client view vs. web downloads) and confirm requirements. - The interim Power BI report providing a client view of tasks will be delivered promptly. - Development of the permanent client view filter within the FDG portal will commence, with an estimated completion timeframe of 1-2 days once started. - The web downloads/upload request will be addressed based on the clarified priority and requirements gathered via the Teams thread, aiming for readiness before Wednesday of the following week to support contractor onboarding.
Faisal <> Sunny
### Summary The meeting focused on strategic priorities and operational improvements across key product areas, particularly Data Services (DSS) and Utility Bill Management (UBM). Key themes included migrating DSS functionality into UBM to eliminate redundancy, addressing customer dissatisfaction due to overpromising in sales, and establishing robust processes for tracking work and customer sentiment. The Azure migration for UBM was discussed as a current challenge causing delays, with plans to mitigate risks. Sunsetting legacy systems like DDS and FDG Connect was prioritized, though phased over six months to ensure stability. Bill pay automation emerged as a critical pain point, with exploration of vendor solutions to replace manual processes. ### Wins - Significant reduction in the DSS backlog compared to previous months. - Access to JIRA secured for better project management and future automation capabilities. - Two new engineers identified to accelerate development of a standalone API service for DSS. ### Issues - **Azure Migration Complications**: Unresolved tickets and delays in migrating UBM from GCP to Azure, impacting timelines. - **Legacy System Redundancy**: DDS and FDG Connect create operational inefficiencies and syncing errors with UBM. - **Customer Dissatisfaction**: Overpromising during sales and prolonged delays (e.g., PPG’s unmet 9-month deliverables) eroding trust. - **Process Gaps**: Lack of standardized workflows for tracking team throughput, estimating work, or quantifying customer sentiment (NPS). - **Bill Pay Inefficiencies**: Manual, error-prone processes causing client frustration and operational bottlenecks. ### Commitments - **Process Standardization**: Transition project tracking from Azure DevOps to JIRA, enforce story point estimates for all work items, and document team velocity. - **Legacy System Sunset**: Phase out DDS over six months after ensuring DSS pipelines are stable and supported by new developers. - **Customer Sentiment Tracking**: Implement NPS monitoring in HubSpot to quantify dissatisfaction drivers and prioritize fixes. - **Bill Pay Automation**: Evaluate vendor solutions (e.g., Ramp) to replace manual workflows and enable multi-tenant capabilities. - **Product Consolidation**: Integrate FDG Connect’s essential functions into UBM before decommissioning the standalone tool.
DSS Check-in
## Workflow and Ticketing System Challenges The current workflow relies heavily on an Azure Kanban board for tracking DSS and FDG Connect tasks, but visibility is limited: only Rachel (Customer Success) has access, creating a bottleneck. Tickets are addressed reactively as they appear, with no standardized prioritization or sprint planning. Feedback primarily flows through Teams messages, which is inefficient for tracking and leads to fragmented communication. - **Azure DevOps limitations**: The board lacks automation capabilities and causes authentication hassles, prompting consideration of migrating to Jira for better workflow automation and user experience. - **Feedback management**: User-reported issues are currently handled ad-hoc via Teams tags, which fails to provide auditable trails or structured prioritization. - **Story point inconsistencies**: Existing tickets show sporadic story point assignments without standardized sizing or team consensus, making velocity tracking impossible. ## Team Structure and Responsibilities The DSS team comprises two primary developers and a DevOps/DBA support unit. Key roles include: - **Development**: Focused on bug fixes and feature enhancements for DSS and FDG Connect, with Gaurav being the original architect. - **Support roles**: Mircea (DevOps) manages Azure infrastructure and pipelines, while Andre handles database administration. - **Cross-functional dependencies**: Rachel acts as the primary liaison for customer requests, but she’s overloaded with triaging and ticket creation due to the lack of self-service tools. ## Product and Technical Debt DSS suffers from legacy issues and premature rollout challenges, while FDG Connect has unresolved bugs from previous development efforts. - **DSS stability**: A critical double-counting usage bug was recently resolved, but sporadic issues persist due to inadequate monitoring. - **FDG Connect technical debt**: Previously implemented features often fail user expectations, requiring rework, partly due to rushed deployments without phased testing. - **Legacy system friction**: The outdated DJS application remains in use alongside FDG Connect, creating redundancy and user confusion; sunsetting it requires a phased migration strategy. ## Process Improvement Initiatives Immediate changes aim to introduce structure while longer-term solutions are developed. - **Temporary workflow fixes**: All issues must now be logged in Azure DevOps, even those reported via Teams, to centralize tracking until a better system is implemented. - **Documentation and estimation standards**: Engineers must add story points (3/5/8 scale) and maintain task documentation to enable velocity measurement. - **Work-in-progress limits**: Strict enforcement of focusing on single tasks to reduce context-switching and improve throughput. ## Future Planning and Strategic Shifts The focus will shift from maintaining FDG Connect to enhancing DSS automation, with emphasis on foundational improvements. - **DSS prioritization**: Upcoming work will target critical usability gaps like bulk operations, filtering, and sorting to reduce manual auditing. - **Phased rollouts**: New features will deploy incrementally (e.g., 10% user cohorts) to catch issues early, contrasting with past "big-bang" releases (a cohort-gating sketch follows this summary). - **Tooling overhaul**: Jira adoption is proposed to replace Azure DevOps, enabling automated workflows and broader stakeholder access.
- **Legacy sunset plan**: DJS retirement will align with DSS achieving feature parity, ensuring users transition smoothly via parallel operation periods.
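As referenced in the phased-rollout item above, a minimal cohort-gating sketch. Hashing the user and feature name gives each user a stable bucket, so the same users stay in (or out of) an incremental rollout across sessions; the feature name is illustrative:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically map a user to a 0-99 bucket; users below the
    percentage threshold see the feature."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 100 < percent

# e.g., enable bulk operations for a 10% cohort first, then widen to 50%, 100%
print(in_rollout("user-42", "bulk-operations", 10))
```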
Teams Review
## Summary The discussion centered on role transitions, process improvements, and strategic planning across key operational areas. For bill pay operations, immediate plans involve assigning a knowledgeable team member to assist with data repairs starting November/December, followed by a six-month transition to stabilize and automate the system. This addresses current inefficiencies where manual customer-specific configurations consume excessive engineering resources. Regarding vendor strategy, the organization remains locked into Pay Clearly for approximately one year due to integration complexities and testing requirements. While exploring alternative vendors for future automation is encouraged, day-to-day stability takes precedence. Participation in vendor discussions should focus on understanding pain points and integration requirements without diverting from core responsibilities. For DSS operations, immediate actions include establishing daily check-ins with engineering leads to improve issue visibility and documentation. A critical gap exists due to the lack of product ownership, necessitating better tracking of recurring themes and validation that operational feedback is properly understood by developers. Current reporting relies on a Power BI queue view, with plans to transition from Teams-based communication to a formal ticketing system within three weeks after refining processes. Longer-term planning includes developing 30/60/90-day role prioritization frameworks to clarify responsibilities across ops, dev, and other teams, plus scheduling an office visit this month to align with key stakeholders during their Tuesday office days. ## Wins - Clear roadmap defined for bill pay system stabilization and automation - Strategy formalized for vendor management transition timeline - Daily check-ins established to address DSS visibility gaps - Concrete timeline set for ticketing system implementation ## Issues - Bill pay requires extensive manual per-customer configuration - Pay Clearly vendor lock-in limits automation opportunities - DSS lacks product ownership causing documentation gaps - Ad-hoc Teams communication hinders issue tracking - Role ambiguity exists across operational teams ## Commitments - Assign resource to bill pay data repairs by November/December (Owner: Management) - Maintain Pay Clearly operations for ~1 year while evaluating alternatives (Owner: Team) - Conduct daily DSS syncs with engineering leads (Owner: Team Member) - Implement formal DSS ticketing system in 3 weeks (Owner: Team) - Develop role prioritization framework (Owner: Both) - Schedule office visit this month (Owner: Management)
UBM Planning
## Summary The discussion focused on clarifying role transitions and priorities moving forward. Jay will transition from her current role to focus on data repairability as an analyst, with her official reclassification expected within 1-2 months. This shift necessitates a handover of the official UBM Product Owner role. The immediate priority is dedicating significant effort to understanding and managing the entire DSS process, which represents approximately 90% of customer value. This includes bill downloads (internal/external), data transformation, and synchronization with the UBM batch. Concurrently, enhancing visibility into the UBM ecosystem (tracking customer bills and their statuses) is critical. Jay will focus on clearing existing UBM backlog items, primarily technical debt and historical baggage, to enable a fresh start for new UBM initiatives by December/January. Awareness of ongoing architectural discussions (e.g., potential move to Azure) is required, but deep involvement isn't needed unless critical blockers arise. The goal is to achieve a strong grasp of DSS and UBM processes by year-end to facilitate seamless sprint planning, story development, and release management starting early next year. ## Wins - Completion of the initial computer setup process. - Clear definition of the transition plan for Jay's role and the UBM Product Ownership. ## Issues - Persistent technical difficulties with VPN connectivity causing disconnections and login issues, impacting communication responsiveness. - Camera functionality problems during setup. - Uncertainty regarding the specific timeline and mechanics for the handover of UBM Product Owner responsibilities and backlog management from Jay. - The substantial volume of existing technical debt and historical backlog items in UBM requiring cleanup before new work can commence effectively. - The complexity and time-intensive nature of mastering the entire DSS process and UBM ecosystem visibility, identified as taking significant focus until year-end. ## Commitments - **Owner:** Dedicate maximum effort to mastering the **DSS process** (bill downloads, transformation, UBM batch sync) and establishing comprehensive **UBM ecosystem visibility** (customer bill tracking). - **Owner:** Maintain awareness of ongoing **architectural discussions** (e.g., Azure migration) within the IT team/Shared Constellation, escalating only critical blockers. - **Jay:** Focus on clearing the existing **UBM product backlog**, specifically addressing technical debt and historical items, to enable a clean slate for new work by December/January. - **Management:** Officially reclassify Jay's role towards **data repairability analysis** within the next 1-2 months. - **Management:** Formalize the transition to the new **UBM Product Owner role** by year-end, enabling full ownership of sprint planning, story definition, and release notes starting early next year.
[EXTERNAL]UBM Demo and planning
## Build and Deployment Issues Significant time was spent addressing technical challenges in build processes and deployment pipelines. A critical build error occurred on September 2nd due to a misconfigured environment variable that prevented inclusion of development dependencies, requiring code reversion and extensive debugging. Separately, optimizations were made to pre-production builds by eliminating unnecessary approval steps that caused delays, as these were deemed non-essential for non-production environments. - **Environment variable failure**: The September 2nd master branch failure stemmed from a new variable disrupting dependency inclusion, complicating debugging due to non-obvious symptoms. - **Pre-prod workflow refinement**: Removal of redundant deployment approval steps accelerated testing cycles by reducing administrative overhead. ## Feature Development Updates Multiple feature enhancements were implemented across customer-facing modules. SLA configurations in FineBiz and Customer Info now dynamically adjust based on payment settings: enabling payments reduces SLA timelines to 3 days, while disabling extends them to 5 days. Emergency payment controls were hardened to disable buttons after activation, preventing duplicate processing. - **Dynamic SLA logic**: Payment toggle directly influences service-level agreements, automating compliance adjustments. - **Report generation upgrade**: All newly created reports now auto-append timestamps to filenames using the user's local timezone, resolving ambiguity in version tracking. - **UI/UX refinements**: Removed obsolete "target hours" field from customer models and fixed hierarchy tree/search functionality in billing modules. ## HubSpot Integration Improvements Critical fixes were applied to the HubSpot data sync job after validation errors caused repeated failures. Field-level validations were reconfigured to handle data type mismatches (e.g., numeric fields receiving empty strings) and unnecessary mandatory checks. - **Validation overhaul**: Removed redundant "required" flags on non-essential fields like Genesis numbers, allowing successful syncs even when optional data is absent. - **Data type resilience**: Implemented robust parsing for numeric fields to prevent job failures during string-to-number conversions. - **Backlog processing**: Repaired 1,860 previously stalled bills through targeted data remediation scripts. ## Quality Assurance Activities QA covered three releases with distinct focus areas: - **September 2nd release**: Validated standard AP file formats, Marcot GL updates, and service ZIP improvements. - **September 9th release**: Tested reconciliation data flows, auto-close error handling, and billing validation frameworks. Ongoing testing includes Auth0 integration, emergency payment disablement, and mock-to-actual bill linking. Four stories remain on hold pending bug fixes. ## Sprint Planning and Future Work Priorities for the upcoming sprint include: - **Payment system enhancements**: Finalizing payment list API backend and implementing amount cross-checking between AB files and PayClearly data. - **Automation initiatives**: Developing Jira auto-ticketing for job failures and expanding end-to-end test coverage. - **Technical debt**: Continuing Quasar framework package upgrades and Azure storage adapter implementation. - **Customer-specific deliverables**: Pacific Capital AP file development using reusable AppFolio templates to avoid custom solutions. 
- **Analytics expansion**: Adding Waste Analytics entitlements in Pathfinder and refining Power BI reconciliation reports. **Timezone clarification**: Report timestamp implementation will use end-user local time rather than GMT for filenames and metadata, ensuring consistency with user context.
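A minimal sketch of the timestamp behavior clarified above, rendering report filenames in the end user's local timezone rather than GMT. The filename pattern is an assumption:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

def timestamped_name(base: str, user_tz: str) -> str:
    """Append the user's local time so daily exports are unambiguous,
    e.g., bill_details_2025-09-09_1430.csv."""
    now = datetime.now(ZoneInfo(user_tz))
    return f"{base}_{now.strftime('%Y-%m-%d_%H%M')}.csv"

print(timestamped_name("bill_details", "America/New_York"))
```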
DSS/UBM Errors
## Summary ### Data Processing Failures and Root Cause Analysis The meeting focused on recurring data ingestion failures during CSV file processing from DSS to UBM, primarily caused by empty lines or missing fields. Key issues include: - **Empty lines in CSV files**: Triggering "commodity missing" errors in UBM, as the system expects data in specific columns but encounters blank rows with only commas. - **Missing bill blocks**: Identified in a specific file where "Bill Block 2" data was absent in the output, leading to incomplete records. - **Legacy system limitations**: The DSS output service (a legacy component) is suspected of generating malformed files, but knowledge gaps exist since only one person (Matthew) fully understands it. ### Proposed Solutions for Error Handling Two parallel approaches were discussed to address ingestion failures: - **Immediate FTP-based workflow**: - Generate output files in the existing FTP location for quick implementation without new security configurations. - Provide daily/hourly failure reports to DSS for manual reprocessing. - **Long-term API integration**: - Replace FTP with direct API-to-API communication for real-time failure notifications and faster resolution. - Requires firewall/security adjustments but eliminates file transfer delays. ### UBM's Limitations in Error Resolution Constraints prevent UBM from autonomously fixing data issues: - **Lack of contextual data**: UBM cannot diagnose or resolve errors without access to DSS source systems or bill details. - **Data dependency rules**: Creating a valid bill requires specific mandatory fields (e.g., commodity type), which can't be overridden or left null. - **No bulk editing capability**: UBM lacks tools to correct batches en masse; each error must be addressed individually via DSS reprocessing. ### Debugging a Specific Failure Case Participants analyzed a real-world example to isolate the empty-line issue: - **File inspection**: A shared CSV revealed an empty row (row 2) containing only commas, causing UBM parsing to fail. - **DSS data validation**: The source data in DSS showed no gaps, confirming the error originates in the legacy output service during CSV generation. - **Pattern identification**: Empty lines consistently appear between bill blocks or at file endings, disrupting UBM's row-based ingestion logic (a row-filtering sketch follows this summary). ### Strategic System Transition The team emphasized migrating away from fragile legacy components: - **Future state vision**: Replace CSV outputs with JSON delivered via API, bypassing the error-prone legacy service entirely. - **Prioritization needed**: Transition requires testing in lower environments, but timeline depends on resource allocation (to be discussed with stakeholders). - **Short-term compromise**: Fix the immediate output service issue to reduce failures while planning the full migration.
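As referenced in the debugging notes above, a minimal guard that drops comma-only rows before they reach the ingestion parser; a short-term mitigation sketch, not the actual UBM code:

```python
import csv
import io

def non_empty_rows(csv_text: str):
    """Yield only rows that carry data; a row of bare commas parses as
    all-empty fields and would otherwise trip "commodity missing" errors."""
    for row in csv.reader(io.StringIO(csv_text)):
        if any(field.strip() for field in row):
            yield row

sample = "account,commodity,amount\n,,\nACC-1,electric,120.50\n"
print(list(non_empty_rows(sample)))
# [['account', 'commodity', 'amount'], ['ACC-1', 'electric', '120.50']]
```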
Sync
## Summary ### Batch Upload Error Handling The meeting focused on addressing recurring issues with batch upload failures during CSV imports. Currently, the entire batch fails if any single bill contains errors like empty lines or missing commodities, forcing manual fixes and re-uploads. This process is unsustainable as error volumes increase. Key limitations include: - **Batch-level failure**: A single invalid bill causes rejection of all bills in the batch, creating operational bottlenecks. - **Limited error granularity**: Current logs only identify folder-level issues, not specific fields or files, complicating troubleshooting. - **Manual intervention risks**: Teams often open CSVs in Excel for fixes, which can corrupt formatting and compound errors. ### Data Ingestion Process The UBM system's ingestion workflow was clarified: 1. **Validation**: Checks for required fields upon CSV receipt. 2. **Transformation**: Converts Data Services CSV format to UBM-compatible structure. 3. **Processing**: Inserts data into the database and runs validation rules. 4. **State assignment**: Bills are flagged (e.g., integrity check, data audit) based on validation outcomes. A transition to **JSON-based ingestion** is underway, which will retain the same batch structure (ZIP with one JSON + one PDF per bill) but offer more flexibility. ### File Processing Workflow To resolve batch failure issues, a "one bill per batch" approach is being standardized for DSS workflows: - **Current limitation**: Legacy processes (e.g., summary/international bills) still bundle multiple bills per batch, causing cascading failures. - **Solution**: A new DSS output pathway will generate individual ZIPs per bill, isolating failures. Legacy pathways will remain temporarily for edge cases. - **Implementation**: Requires minimal development to automate ZIP creation and storage in predefined locations, with testing critical for handling diverse bill types. ### Scalability for New Formats The system's extensibility for future formats was evaluated: - **Adapter architecture**: UBM uses format-specific adapters (e.g., CSV, JSON), enabling incremental support for new types like XML. - **Low effort expansion**: Adding new adapters is feasible without major rework, though complexity depends on source data structure. - **Backward compatibility**: Existing formats (CSV) will remain supported alongside new additions like JSON. ### Error Management and Logging Discussions highlighted the need for robust error handling: - **Notification challenges**: Current email alerts for failures are inefficient at scale and lack filtering capabilities. - **Proposed improvements**: Centralized logs showing attempted imports with status (success/failure) and detailed error context would accelerate resolution. - **Sustainability**: Moving beyond manual email triage to self-service error dashboards is essential for operational scalability.
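A minimal sketch of the one-bill-per-batch packaging described above, one JSON plus one PDF per ZIP; the output location and naming are assumptions:

```python
import json
import zipfile
from pathlib import Path

OUTPUT_DIR = Path("dss_output")  # hypothetical drop location for UBM pickup

def package_bill(bill_id: str, bill_data: dict, pdf_path: Path) -> Path:
    """Write one ZIP per bill so a malformed bill fails alone instead of
    rejecting an entire multi-bill batch."""
    OUTPUT_DIR.mkdir(exist_ok=True)
    zip_path = OUTPUT_DIR / f"{bill_id}.zip"
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.writestr(f"{bill_id}.json", json.dumps(bill_data))
        zf.write(pdf_path, arcname=f"{bill_id}.pdf")
    return zip_path
```

Because failure isolation happens at the packaging step, the UBM ingestion side needs no change to stop one bad bill from cascading.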
DS AI Bill Reader and Arcadia
## Prospect The prospect holds a leadership position within Constellation Navigator, overseeing digital platforms including utility bill management and carbon accounting solutions. With over 24 years of industry experience, they focus on streamlining data integration for enterprise clients, particularly in automating utility data retrieval and enhancing sustainability reporting capabilities. Their background includes managing large-scale energy data projects and addressing operational challenges like multi-factor authentication (MFA) hurdles and credential management. Key priorities include optimizing bill payment workflows, expanding data coverage for diverse utility providers, and integrating interval data for advanced analytics. ## Company Constellation Navigator is a digital platform under Constellation Energy, specializing in comprehensive energy management solutions. Its core offerings include: - **Utility Bill Management (UBM)**: Handling invoice processing, data extraction, and payment workflows for commercial clients. - **Carbon Accounting**: Calculating emissions and supporting sustainability reporting for Constellation’s customers. - **Energy Optimization**: Using tariff analysis and interval data for efficiency modeling and cost savings. The platform is transitioning to a centralized data lake architecture to unify utility data, sustainability metrics, and energy management tools. This aims to reduce manual interventions, improve scalability, and enable cross-functional analytics across Navigator’s ecosystem. ## Priorities Constellation Navigator’s immediate priorities for enhancing their platform include: - **Automating Credential Management**: Seeking solutions to programmatically handle utility account credentials (e.g., via API) to minimize manual entry and reduce onboarding friction for large clients. - **Improving Data Reliability**: Ensuring high-accuracy PDF extraction and structured data retrieval with enhanced SLAs, especially for bill-pay customers requiring timely invoice processing. - **Expanding Utility Coverage**: Validating Arcadia’s coverage of specific utility providers and addressing gaps through template development for unsupported utilities. - **Enabling Advanced Use Cases**: - Integrating interval data for sustainability reporting and carbon footprint calculations. - Leveraging tariff data (via Signal) with actual consumption metrics for energy-savings analysis. - **Mitigating MFA Challenges**: Implementing proactive workarounds for utility-specific MFA disruptions to maintain data flow consistency. - **Scalable Data Integration**: Establishing a unified API connection to Arcadia’s Plug platform to feed Navigator’s data lake, supporting automated triggers for credential updates and invoice alerts.
Faisal re Bill Pay Pivot
## Prospect Faisal Alahmadi serves as a Product Owner supporting the Utility Bill Management (UBM) platform at Constellation Navigator. He recently joined the team approximately three weeks ago and brings prior experience from Bill.com, which informs his perspective on payment processing solutions. His role focuses on enhancing the platform's capabilities for bill payment and data management. ## Company Constellation Navigator operates as a specialized division within Constellation Energy, providing comprehensive data management services for clients. The company specializes in handling utility bills (including energy, water, sewer, gas, and waste) through its UBM platform. Key functions include: - Extracting and processing data from utility invoices. - Performing calculations and analytics for clients. - Managing bill payments on behalf of clients via a "for benefit of" (FBO) bank account. - Delivering accounts payable (AP) data directly to clients' financial systems. The platform targets recurring, high-volume utility expenses and seeks scalable payment solutions to integrate with its service offerings. ## Priorities The team's exploration of Ramp centers on three critical objectives: 1. **Payment Method Expansion**: Assessing Ramp’s support for ACH, paper checks, and credit card payments to complement existing payment options. 2. **Vendor Management Scalability**: Evaluating whether Ramp’s vendor setup can accommodate a parent-child relationship model, where Constellation Navigator acts as an intermediary managing payments for multiple client accounts. 3. **Integration Flexibility**: Determining if Ramp can accept pre-normalized payment data (e.g., vendor details, amounts, due dates) directly from Constellation’s platform, bypassing the need for traditional accounting software integrations (an illustrative payload sketch follows this summary). The discussion emphasized that partnership feasibility hinges on these technical and operational alignments, prompting plans to engage Ramp’s dedicated partnerships team for deeper evaluation.
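As referenced in the integration-flexibility item above, an illustrative shape for a pre-normalized payment record; the field names are assumptions for discussion, not Ramp's actual API schema:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class PaymentInstruction:
    """One normalized payment as it might leave the UBM platform."""
    client_id: str      # parent-child model: which managed client this belongs to
    vendor_name: str
    vendor_account: str
    amount_cents: int   # integer cents avoids float rounding in money math
    due_date: str       # ISO 8601
    method: str         # "ach" | "check" | "card"

record = PaymentInstruction(
    "CLIENT-7", "City Water Dept", "9912-44", 128450, "2025-10-01", "ach"
)
print(json.dumps(asdict(record)))
```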
DSS/UBM sync
## Summary

The meeting served as an introductory discussion and initial alignment between a new team member focusing on DSS and the existing UBM Product Owner. Key topics included introductions, role definitions, current challenges with the DSS-UBM integration, and establishing collaboration rhythms. The new member brings a product management background from fintech startups, joining to focus primarily on resolving critical issues within the DSS system while supporting UBM.

Significant challenges exist in the data flow between DSS and UBM, particularly missing or inaccurate bill data causing processing failures and a backlog (e.g., 5k bills for PPG). This stems from DSS not consistently providing the data UBM requires for bill ingestion and payment processing, a previously untouched area now becoming critical. Conflicting understandings exist on whether data validation and correction should occur within DSS or UBM, requiring urgent strategic clarification. Resource allocation is strained between onboarding new customers, resolving the immediate DSS batch issues, and developing long-term solutions. Knowledge gaps about the DSS system and the specific integration points with UBM complicate troubleshooting and solution design. Establishing clear communication channels and working sessions involving both the DSS and UBM technical teams (including Romania-based engineers) was identified as essential.

## Wins

- Successful initial connection and rapport established between the new DSS-focused member and the UBM Product Owner.
- Clear identification of the core technical challenge: data mismatch and missing information in bills flowing from DSS to UBM (a pre-transfer check is sketched after this summary).
- Agreement on the necessity of a dedicated working session to map the DSS-UBM integration and define data requirements.

## Issues

- **DSS-UBM Data Integration:** Critical issues with bill data (missing/inaccurate fields) from DSS causing failures in UBM processing, leading to significant backlogs (e.g., 5k bills for PPG) and potential payment delays/penalties for customers.
- **Strategic Ambiguity:** Lack of clarity on whether data validation and correction should be handled within DSS before transfer or within UBM after receipt.
- **Resource Prioritization Conflict:** Difficulty balancing resources between urgent DSS firefighting (resolving daily batch issues), customer onboarding demands, and developing sustainable long-term solutions.
- **Knowledge Gaps:** Limited understanding across teams about the full DSS system functionality and the specifics of the DSS-UBM integration process ("black box" perception). The new member lacks deep utility industry knowledge (billing structures, supply/distribution concepts).
- **Communication Silos:** Disconnect between the DSS team and the UBM team hinders effective troubleshooting of integration issues.
- **Time Zone Challenges:** Collaboration with the key Romania-based engineering team requires careful scheduling due to the time difference.

## Commitments

- **Joint Strategy Session:** Both parties will participate in the scheduled meeting with Sunny to clarify the strategic approach for DSS-UBM data handling and integration fixes.
- **Recurring Sync:** A weekly 30-minute sync meeting (Mondays, 9 AM EST) will be established between the DSS lead and UBM Product Owner for ongoing alignment.
- **Meeting Inclusion:** The DSS lead will be added to the existing UBM team's recurring meetings (e.g., Wednesday backlog grooming/demo/planning sessions).
- **Technical Team Engagement:** The DSS lead will schedule introductory meetings with key UBM engineers (Alex Chichok, Ruben Furman, Alex Mcish) involved in the DSS integration.
- **Working Session Planning:** Explore scheduling a dedicated technical working session involving DSS and UBM engineers to map the integration flow, identify failure points using real batch examples, and define data requirements. Timing needs to accommodate the Romania team (potentially earlier EST).
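A minimal sketch of the pre-transfer check discussed above: verify that a bill coming out of DSS carries the fields UBM needs before handoff, instead of letting it fail downstream. The required-field list is an assumption for illustration, not the actual UBM ingestion contract.

```python
# Illustrative required fields; the real list would come from the DSS-UBM
# working session that the meeting committed to.
REQUIRED_FIELDS = ["account_number", "vendor_id", "invoice_date",
                   "due_date", "total_amount"]

def missing_fields(bill: dict) -> list[str]:
    """Return required fields that are absent or empty.

    For simplicity this treats None, "" and 0 alike as missing; a real
    check would distinguish a legitimate zero-dollar bill.
    """
    return [f for f in REQUIRED_FIELDS if not bill.get(f)]

def partition_batch(bills: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split a DSS batch into bills ready for UBM and bills needing rework."""
    ready, rework = [], []
    for bill in bills:
        (rework if missing_fields(bill) else ready).append(bill)
    return ready, rework

if __name__ == "__main__":
    batch = [
        {"account_number": "A1", "vendor_id": "V9", "invoice_date": "2025-01-03",
         "due_date": "2025-01-18", "total_amount": 412.07},
        {"account_number": "A2", "vendor_id": "V9", "invoice_date": "2025-01-03",
         "due_date": "", "total_amount": None},  # would fail inside UBM today
    ]
    ready, rework = partition_batch(batch)
    print(f"{len(ready)} ready, {len(rework)} kicked back:",
          [missing_fields(b) for b in rework])
```

Catching the incomplete bill at the boundary is what keeps a single bad record from stalling an entire batch downstream, which is the failure mode behind the PPG backlog.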
PPG/Constellation UBM Discussion
## Customer

PPG is a client utilizing Constellation's utility bill management platform to process and analyze energy consumption data across multiple locations. The organization is currently in the implementation phase, focusing on integrating historical and ongoing utility bill data into the platform for comprehensive reporting and analysis. Their operations involve managing complex utility accounts (electricity, water, natural gas) across diverse sites, requiring accurate data mapping, timely processing, and robust reporting capabilities to support their internal systems and analytics.

## Success

The most significant success identified is the platform's ability to provide detailed utility bill breakdowns through specific report columns like **subcharges** and **usage charges**. These columns effectively delineate costs by commodity type (e.g., water, gas, electricity) even on consolidated bills, enabling PPG to accurately attribute costs to specific services. Additionally, the platform's **integrated help documentation** was acknowledged as a valuable resource for understanding report structures, column definitions (like "current charges" vs. "subcharges"), and core platform concepts such as "bill blocks" and "virtual accounts", aiding PPG in navigating complex data outputs.

## Challenge

PPG faces significant challenges primarily centered around **reporting functionality gaps** and **operational delays**. Key reporting issues include:

- The inability to automatically append dates to exported report filenames, causing confusion with daily/monthly exports.
- Missing critical data fields like the **bill loaded date** in reports (especially the Bill Details report), preventing PPG from filtering or identifying newly added data month-over-month based on upload time rather than invoice date (both fixes are sketched after this summary).
- Unexpected splitting of full-service bills into separate supply/distribution line items in reports, initially causing confusion about data duplication.
- Inconsistent data presentation leading to misunderstandings about report columns and their meanings.

Operationally, severe **data processing backlogs** are a major hurdle:

- Significant delays (up to months) in processing both historical data uploads and *current* utility bills, far exceeding expected SLAs (e.g., 3 business days).
- Unapproved accounts being onboarded via portals without PPG notification or consent.
- A large backlog of unmapped virtual accounts and unpaired accounts impacting reporting accuracy, with unclear responsibility (Constellation operations vs. PPG self-service) and slow resolution.
- Delays in implementing critical report changes, with initial estimates pushing resolutions to mid-October, jeopardizing PPG's internal system launch deadlines.

## Goals

PPG's primary goals for using the Constellation platform are:

1. Achieve fully functional and reliable SFTP-based reporting capabilities, including date-stamped filenames and a bill loaded date field for accurate new data identification.
2. Ensure all utility bill data (both historical backlog and ongoing) is processed and available in the platform accurately and in a timely manner, adhering to agreed SLAs.
3. Resolve specific data mapping and reporting inconsistencies (e.g., unexpected bill splitting, unmapped accounts, unpaired accounts).
4. Implement the "site number" column in the Bill Details report for consistency with other reports.
5. Validate the accuracy of Constellation data against PPG's finalized 2024 records.
6. Establish a clear, expedited timeline for resolving critical issues, aiming for full platform functionality and reliable operation by year-end.
7. Explore enabling Single Sign-On (SSO) for streamlined user access.
8. Understand and potentially utilize admin capabilities for managing exceptions (e.g., mapping virtual accounts) if contractually permitted.
9. Add additional admin users as needed for platform management.
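Both of PPG's top reporting asks are small in code terms, which is part of their frustration. A minimal sketch of the two behaviors, with column names assumed for illustration (they are not the actual report schema):

```python
import csv
from datetime import date, datetime
from pathlib import Path

def export_report(rows: list[dict], base_name: str, out_dir: Path) -> Path:
    """Write rows to a CSV whose filename carries today's date."""
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{base_name}_{date.today():%Y%m%d}.csv"  # e.g. bill_details_20250320.csv
    with path.open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
    return path

def loaded_since(rows: list[dict], cutoff: datetime) -> list[dict]:
    """Filter on the bill *loaded* date (upload time), not the invoice date."""
    return [r for r in rows
            if datetime.fromisoformat(r["bill_loaded_date"]) >= cutoff]

if __name__ == "__main__":
    rows = [
        {"site_number": "PPG-014", "invoice_date": "2024-11-02",
         "bill_loaded_date": "2025-02-10T09:15:00", "total": "1840.22"},
        {"site_number": "PPG-031", "invoice_date": "2025-01-15",
         "bill_loaded_date": "2025-01-20T16:40:00", "total": "96.10"},
    ]
    new_rows = loaded_since(rows, datetime(2025, 2, 1))
    print(export_report(new_rows, "bill_details", Path("exports")))
```

The loaded-date filter is the key distinction: an old invoice uploaded yesterday is "new data" for PPG's month-over-month reconciliation even though its invoice date is months old.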
HOLD - Microsoft review of UBM architecture
## Summary

### Current GCP Architecture

The existing environment includes a Kubernetes cluster hosting a web interface, internal maintenance services, and PostgreSQL running on VMs (master/read-replica setup). Third-party managed services include Cloud Storage, Redis, and a Cloud Function. Key characteristics:

- **PostgreSQL migration driver**: The complexity of VM maintenance prompts a move to Azure-managed PostgreSQL.
- **External access requirements**: Customers connect to PostgreSQL read replicas via TLS over TCP using DNS records, with certificate management via Let's Encrypt causing operational overhead due to renewal failures (the access pattern is sketched after this summary).
- **Data volume**: 1.5TB of unstructured data (PDFs/invoices) in Cloud Storage requires migration.

### Proposed Azure Architecture

The Azure deployment uses logical separation between the **green cloud** (internal zone, an extension of the on-premises network) and the **red cloud** (DMZ/external zone):

- **AKS cluster**: Hosted in the red cloud with ingress via Application Gateway (reverse proxy) in a dedicated VNet, protected by Palo Alto firewalls.
- **PostgreSQL**: Azure Database for PostgreSQL (flexible server) deployed in the DMZ with public endpoints due to private endpoint limitations; read-replica access routed through Application Gateway's TLS preview feature.
- **Ancillary services**:
  - Redis Cache and a Function App (Python-based budget calculations) placed in the green cloud with private endpoints.
  - Container Registry secured via firewall IP allow-listing.
- **Network security**:
  - Separate firewalls between Application Gateway/AKS and the red/green zones.
  - WAF enabled on Application Gateway for HTTP traffic (preventative mode).

### Migration Improvements and Challenges

Key enhancements address GCP limitations while introducing new considerations:

- **Certificate management**: Application Gateway's TLS termination preview feature eliminates the Let's Encrypt dependency for PostgreSQL custom domains.
- **AKS operational model**: Evaluating a shift to AKS Automatic (similar to GKE Autopilot) for automated patching/scaling, though it may require a cluster rebuild.
- **Data transfer**: No dedicated connection exists between GCP and Azure; the 1.5TB migration from Cloud Storage is pending.
- **High-risk element**: Reliance on Application Gateway's preview feature for business-critical TLS proxying to PostgreSQL.

### Security and Connectivity Design

The network architecture emphasizes segmentation and controlled access:

- **Zone traversal**: AKS (red zone) initiates connections to green-zone services (Function App, Redis) via firewall rules scoped to specific subnets/IPs.
- **Shared infrastructure**: Application Gateway and the firewalls are multi-tenant resources supporting other applications.
- **External access patterns**:
  - Power BI and customer ETL tools connect to PostgreSQL read replicas.
  - Public endpoints are protected via IP allow-listing and application firewalls.

### Migration Timeline and Next Steps

Deployment is sequenced with clear milestones:

- **Immediate focus**: Development environment setup in Azure (completed for AKS/PostgreSQL in the red zone; green zone pending).
- **Testing phase**: Validating Application Gateway's TLS capabilities for PostgreSQL this week.
- **Production cutover**:
  - Targeted for late Q4/early Q1 via a DNS switch after database/storage synchronization.
  - Duration depends on code adjustments for Azure compatibility.
- **Expert support**: The Microsoft team (AKS, PostgreSQL, and infrastructure specialists) is engaged for:
  - AKS Automatic configuration deep dive.
  - Application Gateway security best practices.
  - Ad-hoc troubleshooting via scheduled sessions (next meeting: the 22nd).
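A sketch of the external-access pattern described above: a customer ETL tool connecting to the PostgreSQL read replica over verified TLS via a DNS name, which after cutover would resolve to the Application Gateway's TLS listener instead of a GCP endpoint. Hostname, database, credentials, and query are placeholders.

```python
import psycopg2  # pip install psycopg2-binary

# Hypothetical DNS record fronting the read replica; the DNS switch at
# cutover is what repoints this from GCP to the Azure Application Gateway.
conn = psycopg2.connect(
    host="replica.customer-reports.example.com",
    port=5432,
    dbname="ubm_reporting",
    user="etl_reader",
    password="***",
    sslmode="verify-full",       # enforce TLS and verify the server certificate
    sslrootcert="corp-ca.pem",   # CA that signed the certificate served at the endpoint
)
with conn, conn.cursor() as cur:
    # Placeholder query: count bills loaded in the last day.
    cur.execute(
        "SELECT count(*) FROM bills WHERE loaded_at >= now() - interval '1 day'"
    )
    print(cur.fetchone()[0])
conn.close()
```

Note the design implication: because customers pin `sslmode="verify-full"`, the certificate presented at the new Azure endpoint must chain to a CA the customers trust, which is exactly why the TLS termination behavior of the gateway's preview feature is flagged as the high-risk element.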
Weekly DSS call
## Summary

### Current Operational Challenges

The team is facing significant issues with the Data Services System (DSS), causing delays in invoice processing and customer disconnects. Key problems include:

- **Blank invoice outputs**: Some invoices processed through DSS are appearing completely blank in the output, bypassing validation checks that should flag such errors. This requires manual reprocessing and investigation into why DSS fails to catch these cases.
- **Processing delays**: Invoices face multiple bottlenecks: delayed downloads, a backlog in data services (taking days), and extended payment processing times (up to 3 days for funding plus a 3-day SLA for payment). For utilities with 15-day payment terms, this often results in late fees or immediate disconnections.
- **Manual intervention for non-BDE invoices**: Scanned or mail-based invoices aren't auto-processed, requiring staff to push them through DSS manually. This creates bottlenecks, especially with limited personnel availability.

### DSS Performance and Volume Metrics

Despite the challenges, DSS showed improved throughput in August: **25,000 invoices processed**, a significant increase from previous months, indicating scalability progress. However, persistent issues like blank outputs and processing gaps undermine reliability; operating at roughly 90% functionality falls short of what the business needs.

### Technical Improvements and Automation

Solutions to streamline DSS workflows were prioritized:

- **Automated backlog processing**: A script-based utility (currently used ad hoc) will be productized to auto-push legacy invoices from the DBS ecosystem into DSS. It will run periodically, throttled to avoid system overload, with duplicate checks to prevent reprocessing conflicts (a sketch follows this summary).
- **Reprocessing mechanism**: DSS allows single-click reprocessing of failed invoices from its interface. For bulk reprocessing, an API pathway exists but needs enhancement for scalability and user-friendliness.
- **Pathway standardization**: Efforts are underway to ensure *all* invoices use DSS exclusively (bypassing legacy systems like "data acquisition"). International invoices remain excluded per existing criteria.

### Reporting and Visibility Enhancements

Better monitoring tools are critical for issue tracking and prioritization:

- **Power BI dashboard upgrade**: The existing DSS Prod report lacks vendor-name mapping and granular export capabilities. A new tab will show invoices processed *by vendor* and *by creating user* to track manual upload volumes and identify bottlenecks.
- **Centralized ticketing system**: An automated tracker (via ADO) will classify DSS issues (e.g., LLM errors, hard-code fixes, upstream/downstream problems). This replaces ad hoc chat-based reporting, providing real-time visibility into recurring bugs versus one-off errors.

### System Stability and Future Steps

Addressing reliability remains urgent:

- **Root-cause analysis**: Blank invoices and "ready to send" failures require deeper investigation to identify common failure points (e.g., validation gaps). Examples of faulty invoices will be analyzed to replicate and fix the issues.
- **Resource constraints**: Staff shortages (e.g., Mercer's access issues, Matthew's absence) exacerbate delays. Immediate support is needed to maintain processing capacity during critical periods.
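A minimal sketch of the productized backlog pusher discussed above: run periodically, push legacy invoices into DSS in small throttled batches, and skip anything DSS has already seen. The three client functions are hypothetical stand-ins for the real legacy-system and DSS calls.

```python
import time

BATCH_SIZE = 50       # throttle: invoices per push
PAUSE_SECONDS = 30    # throttle: pause between batches to avoid overloading DSS

def fetch_backlog_ids() -> list[str]:
    """Stand-in for querying legacy-system invoices not yet in DSS."""
    return [f"INV-{n:06d}" for n in range(1, 121)]

def already_in_dss(invoice_id: str) -> bool:
    """Stand-in duplicate check against DSS before pushing."""
    return invoice_id.endswith("0")  # pretend every tenth is a duplicate

def push_to_dss(invoice_id: str) -> None:
    """Stand-in for the DSS ingestion call."""
    print(f"pushed {invoice_id}")

def drain_backlog() -> None:
    # Duplicate check first, so reprocessing conflicts never reach DSS.
    pending = [i for i in fetch_backlog_ids() if not already_in_dss(i)]
    for start in range(0, len(pending), BATCH_SIZE):
        for invoice_id in pending[start:start + BATCH_SIZE]:
            push_to_dss(invoice_id)
        if start + BATCH_SIZE < len(pending):
            time.sleep(PAUSE_SECONDS)

if __name__ == "__main__":
    drain_backlog()
```

Running this on a schedule (rather than ad hoc) is the "productizing" step: the throttle and duplicate check are what make it safe to leave unattended.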
Report
## Summary

### Core Focus on Invoice Timeliness Metrics

The primary objective centers on optimizing invoice processing by shifting the focus from download volume to timeliness. Merely tracking the number of invoices downloaded daily is insufficient; the critical metric is how many days overdue those downloads are relative to their expected availability. For instance, downloading 10,000 invoices is problematic if they are 50 days late, whereas downloading one invoice on time is acceptable. This approach maximizes available processing time and avoids penalties.

- **Portal downloads as a starting point:** The "global staff performance report" currently tracks manual invoice downloads from portals via FDG Connect, representing the team's activity.
- **Timeliness over volume:** The real value lies in measuring the delay between an invoice's expected availability date and its actual download date, as this directly impacts processing efficiency and late-fee avoidance.
- **Data accuracy challenge:** Initial reporting issues arose because some staff performing downloads (e.g., Faisal) were not reflected in the report, skewing the perceived download volume and timeliness data.

### Current Reporting Limitations & Data Issues

Existing reports, migrated from SSRS to Power BI (using Microsoft Report Builder), provide basic data but lack sophistication and suffer from data quality problems. The "Expected Bill Date" report attempts to calculate timeliness but is hampered by irrelevant or "dead" invoice records cluttering the dataset.

- **Legacy system constraints:** Reports originate from SSRS and were ported to Power BI via Microsoft Report Builder, described as a "funky" and complex tool using nested expressions.
- **"Expected Bill Date" calculation:** This metric uses the vendor's billing-cycle days and the last invoice date to predict when the next invoice should be available for download. The difference between this expected date and the actual download date indicates the delay (e.g., 14 days past due); a sketch of the calculation follows this summary.
- **Data noise problem:** The report includes numerous irrelevant or inactive invoice records ("dead" invoices), making it difficult to isolate and analyze actionable, current data on genuine delays.

### Advanced Metric Considerations

Moving beyond basic delay measurement requires incorporating the factors that influence payment deadlines and processing urgency. The timeliness threshold for concern varies significantly with vendor payment terms and methods.

- **Defining urgency tiers:** Discussions proposed categorizing delays into tiers (e.g., Green: 1-2 days delay, Yellow: 2-5 days, Red: >5 days), acknowledging that the impact of a delay depends on specific vendor terms.
- **Payment terms complexity:** The actual risk associated with a delay depends on the invoice due date (which may be days or weeks after the invoice date) and the payment method the vendor accepts (e.g., instant online payment vs. mailed check).
- **High-risk scenarios:** Some vendors impose severe consequences for late payment, such as service disconnection if payment isn't received by a strict deadline (e.g., by the 26th for a bill available on the 1st), necessitating prioritized handling.

### Impact of Payment Method on Processing

The method required to pay an invoice (e.g., credit card, e-check, physical check) critically affects the total processing time needed and thus the acceptable download-delay window. Vendors requiring slower payment methods demand earlier invoice download and processing.

- **Processing time dependencies:** Traditional processing assumes ~3 days for download, ~3 days for processing/audit, and ~2 days for payment initiation (~8 days total). This timeline is insufficient for vendors requiring mailed checks, which can add up to 14 days.
- **Prioritization need:** Vendors with short payment windows or slow payment methods (like checks) require high-priority download and immediate processing upon availability to meet deadlines.
- **Identifying critical vendors:** Determining which accounts fall into this high-priority category requires analyzing historical payment-method data per vendor, focusing on the most recently used methods.

### Path to Improved Reporting & Analysis

Significant potential exists to enhance reporting accuracy, relevance, and actionability by addressing data quality, leveraging new capabilities, and focusing on the right metrics.

- **Data cleansing imperative:** Cleaning the dataset within reports (e.g., removing "dead" invoices) is essential for accurate delay analysis and meaningful tiered categorization.
- **Leveraging modern tools:** While past analysis relied on cumbersome Excel exports and transformations, API access to systems like Bill.com presents an opportunity to build more efficient, automated querying and reporting tools for real-time insights.
- **Focusing on actionable metrics:** Future reporting efforts should prioritize generating the "days overdue" metric and integrating vendor-specific payment terms and methods to enable true risk-based prioritization of invoice processing.
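The expected-date calculation and tier bands described above reduce to a few lines. A sketch, using the cutoffs proposed in the meeting; everything else (record shape, cycle length) is illustrative:

```python
from datetime import date, timedelta

def expected_date(last_invoice: date, cycle_days: int) -> date:
    """Predict when the next invoice should be available for download."""
    return last_invoice + timedelta(days=cycle_days)

def days_overdue(last_invoice: date, cycle_days: int, downloaded: date) -> int:
    """Delay between expected availability and the actual download date."""
    return (downloaded - expected_date(last_invoice, cycle_days)).days

def tier(delay_days: int) -> str:
    """Proposed urgency bands: green 1-2 days, yellow 2-5, red >5."""
    if delay_days <= 2:
        return "green"
    if delay_days <= 5:
        return "yellow"
    return "red"

if __name__ == "__main__":
    # A vendor on a 30-day cycle whose last invoice was Jan 1: the next bill
    # is expected Jan 31; downloading it on Feb 14 makes it 14 days past due.
    d = days_overdue(date(2025, 1, 1), 30, date(2025, 2, 14))
    print(d, tier(d))  # 14 red
```

The next refinement the meeting called for, folding vendor payment terms and payment method into the tier cutoffs, would turn the fixed bands into per-vendor thresholds rather than change the core calculation.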
[EXTERNAL]FW: [EXTERNAL]Constellation Navigator - SOC Report Touchpoint
## Current SOC 1 Type 2 Reporting Status for UBM

Progress is being made on the interim testing phase for the first-year SOC 1 Type 2 report covering the period ending September 30th. Significant headway has been achieved on IT General Controls (ITGCs) and customer setup controls. However, testing for business process controls, particularly those related to billing operations, is pending due to delays in obtaining the necessary supporting evidence.

- **Interim Phase Focus:** Testing was strategically divided into interim and update phases to distribute the workload. The goal is to complete interim testing for all controls, including the delayed billing operations evidence, within September.
- **Critical Dependency:** Timely receipt of the outstanding billing operations evidence in September is crucial. Delays will impact the subsequent update testing phase and potentially delay the final report issuance.
- **Update Phase Plan:** Upon successful interim completion, update requests (evidence and inquiries) will be issued in October. The reduced sample sizes resulting from interim work will streamline this phase.
- **Target Timeline:** Aiming to conclude update testing before the Thanksgiving break, followed by the reporting stage (narratives, drafts). The objective is to finalize and file the report shortly after Thanksgiving, contingent on evidence timelines.
- **Reporting Preparation:** Drafting Section 3 (system description) of the report is prioritized for review before Thanksgiving. A template for the management assertion letter will be provided for tailoring.

## PayClearly Subservice Organization Considerations

PayClearly, a vendor handling bill payments, will not provide a SOC 1 Type 2 report within UBM's current reporting window (their report has an October 15th date). While efforts were made to encourage alignment, PayClearly cited resource constraints and an acquisition impacting their timeline.

- **Impact Mitigation:** A specific control within UBM's SOC 1 report addresses the lack of timely third-party attestation from PayClearly. This control involves UBM's review of and follow-up on payment exceptions communicated by PayClearly.
- **Control Rationale:** This monitoring control provides assurance over PayClearly's performance despite the absence of their SOC report, demonstrating UBM's oversight. The control was implemented during the Type 1 phase and will continue to be tested.
- **Future Alignment:** Encouragement was given to leverage UBM's position as a customer to push PayClearly for future SOC reports aligned with UBM's September 30th period end, though this is not deemed critical given the existing mitigating control.

## Future State Planning for UBM SOC 1

Planning for the next SOC 1 Type 2 period (October 1 to September 30) is underway, building on the current year's experience.

- **Process Refinement:** A debrief session is planned for February/March 2024 (after the current report is filed) to gather feedback and identify enhancements for a smoother process next year.
- **Testing Approach:** The strategy of interim and update testing phases will likely continue, to manage workload distribution effectively.
- **System Migration Impact:** The planned migration of UBM from GCP to Azure (targeting Q4 2023, pending Dev environment completion and resource availability) necessitates a control impact analysis.
- **Primary Impact:** ITGCs are expected to be most affected due to the infrastructure change. Some automated business process controls reliant on specific infrastructure objects may also be impacted.
- **Testing Strategy:** Sequencing testing around the migration date is critical. Controls impacted in the old environment need testing pre-migration, while controls in the new environment will be tested post-migration. An inventory of impacted controls is required.
- **Evidence Efficiency:** Efforts are ongoing to improve evidence-gathering efficiency, such as developing automated reports (e.g., Power BI) or database views to replace manual email-based data pulls, particularly for user access confirmations.

## Constellation Navigator & Carbon Accounting (SOC 2)

Discussions regarding SOC reporting for the broader Constellation Navigator platform, and specifically for carbon accounting functionality, are in preliminary stages.

- **Carbon Accounting Focus:** Initial discussions about SOC readiness for carbon accounting data processing pivoted to assessing the effort required for BWC to perform a SOC 2 examination specifically over carbon accounting.
- **Current Status:** A Statement of Work (SOW) for a potential SOC 2 examination over carbon accounting has been shared with Constellation (Maurice/Sunny) but requires further internal review and discussion before proceeding. A dedicated meeting is needed to review the SOW details.
UBM Deep Dive
## DSS Invoice Processing Challenges

The primary focus was addressing critical issues within the Data Services System (DSS) impacting invoice processing timeliness. Key problems include:

- **Lack of Clear Status Tracking**: DSS lacks definitive statuses (e.g., "complete," "cancel processing") for invoices, causing confusion between systems like DDAS (Data Acquisition System). This results in hundreds of invoices appearing "stuck" in DSS despite being processed elsewhere.
- **Visibility Gaps**: Operators cannot easily distinguish actively processing invoices from those requiring intervention. Completed or manually resolved invoices remain visible, cluttering queues and obscuring the true backlog.
- **System Mismatches**: When invoices are processed externally (e.g., in DDAS), DSS retains them in "hung states," risking duplicate processing or missed deadlines.

### Proposed Immediate Solutions

- **Status Overhaul**: Introduce explicit statuses (sketched after this summary) such as:
  - **Complete**: For invoices successfully sent to DDAS, UBM, or audit.
  - **Cancel Processing**: To remove irrelevant/invalid invoices from active queues without deletion.
- **Automation Enhancements**: Implement automated retries for failed processes and systematic kickbacks to DDAS for error resolution.
- **Dashboard Optimization**: Filter out completed/canceled invoices to create a clean view of active workloads, prioritized by SLA breaches (e.g., prepay invoices overdue by 10+ days).

---

## Reporting and Monitoring Tools

Power BI reports are critical for tracking workflow health:

- **System Status Report**: Highlights queues color-coded by SLA adherence (red = overdue). It shows:
  - 1,500+ invoices in DSS, with ~300 prepay (high-priority) items needing urgent resolution.
  - Mismatches between DSS and DDAS counts due to unmarked completions.
- **DSS Prod Report**: Reveals 44,000 "completed" invoices but 1,500 stuck in audit states. Toggling "visible=true" exposes hidden items manually marked as irrelevant.
- **DSS Status Report**: Identifies invoices in abnormal states (e.g., IPS failures, corrupt PDFs) since early September, requiring engineering intervention.

### Data Discrepancy Risks

Manual workarounds (e.g., hiding invoices) distort reporting. True resolution requires synchronizing DSS statuses with DDAS outcomes so that Power BI metrics align with operational reality.

---

## Team Structure and Operational Meetings

- **Staffing Constraints**: Matthew (a key engineer) remains on leave, delaying system fixes. New hires start in mid-September but require training.
- **Critical Recurring Meetings**:
  - **Daily Stand-ups** (Corey Finch): Focus on Kanban-based task prioritization (e.g., fixing DSS failures).
  - **Issue Reviews** (Tue/Thu): Teresa-led sessions on systemic blockers.
  - **Metrics Sessions** (Mon/Wed/Fri): Performance tracking against SLAs.
- **Knowledge Gaps**: Documentation on DSS states is sparse. Gaurav and Rachel (Data Services) hold the institutional knowledge for workflow clarification.

---

## Strategic Product Direction

A pivot toward **contract management** was proposed to enhance customer value:

- **Problem Space**: UBM customers struggle to track contract-to-location mappings, renewal dates, and terms across portfolios.
- **Solution Hypothesis**: Contract management could bridge UBM's utility data with Arise's transactional platform, creating a sticky product layer.
- **Next Steps**:
  - Conduct customer research to validate pain points.
  - Develop prototypes for usability testing.
  - Explore integration with UBM's location/account hierarchy.

**Note**: This initiative aims to address the "front-end gap" where current tools lack customer-centric workflows despite backend efficiency.

---

## Technical Debt and Acronyms

- **BDE (Big Data Energy)**: Third-party service for automated invoice downloads.
- **SSRS (SQL Server Reporting Services)**: Deprecated reporting system migrated to Power BI.
- **IIS (Internet Information Services)**: Legacy web server needing modernization to PaaS (e.g., Azure App Services) for CI/CD and scalability.
- **Power Automate**: Previously used for RPA (robotic process automation) in web downloads; abandoned due to inefficiency (2+ months per utility onboarding).
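A minimal sketch of the status overhaul and dashboard filter proposed in this meeting: explicit terminal statuses so completed or cancelled invoices drop out of the active view, with prepay items breaching SLA floated to the top. The status names mirror the proposal; the SLA threshold and record shape are assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PROCESSING = "processing"
    NEEDS_REVIEW = "needs_review"
    COMPLETE = "complete"                     # sent to DDAS, UBM, or audit
    CANCEL_PROCESSING = "cancel_processing"   # removed from queues, not deleted

TERMINAL = {Status.COMPLETE, Status.CANCEL_PROCESSING}
PREPAY_SLA_DAYS = 10  # assumed threshold from the "overdue by 10+ days" example

@dataclass
class Invoice:
    invoice_id: str
    status: Status
    prepay: bool
    days_in_queue: int

def active_queue(invoices: list[Invoice]) -> list[Invoice]:
    """Hide terminal invoices; sort prepay SLA breaches first, then by age."""
    active = [i for i in invoices if i.status not in TERMINAL]
    return sorted(
        active,
        key=lambda i: (not (i.prepay and i.days_in_queue >= PREPAY_SLA_DAYS),
                       -i.days_in_queue),
    )

if __name__ == "__main__":
    queue = active_queue([
        Invoice("A", Status.COMPLETE, False, 40),      # hidden: terminal
        Invoice("B", Status.PROCESSING, True, 12),     # prepay SLA breach first
        Invoice("C", Status.NEEDS_REVIEW, False, 7),
    ])
    print([i.invoice_id for i in queue])  # ['B', 'C']
```

Because "Cancel Processing" is a status rather than a deletion, the Power BI counts can still reconcile against DDAS outcomes, which is the synchronization the Data Discrepancy Risks section calls for.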
DSS Planning Meeting
## Operational Challenges with DSS Implementation

The implementation of the Data Services System (DSS) has created significant operational disruptions, including inaccurate billing data, system inefficiencies, and resource strain.

- **Data inaccuracies and system clutter**: DSS generated redundant or incorrect meter entries (e.g., adding 20+ water meters to a single bill), causing bills to skip audits or require manual correction. This resulted in customer-facing errors like late fees, disconnect notices, and emergency payments.
- **Resource overload**: Teams are working double overtime to manage backlogs, manually process stuck bills, and correct DSS errors. The system's inefficiencies consume excessive memory and create redundant workflows, such as sending individual bills in batches.
- **Root cause analysis**: DSS replaced a previously functional manual process but failed to integrate historical data templates and account configurations. Resource reductions (contractors and employees) coincided with increased workload from UBM's growth, exacerbating the crisis.

---

## Proposed Solutions and Process Improvements

Addressing the DSS issues requires coordinated cleanup efforts, improved error tracking, and strategic resource allocation.

- **Prioritized issue resolution**: Focus on high-impact errors first (e.g., mismatched totals, skipped line items) through a centralized tracking system. A dedicated point person will consolidate reports to avoid duplicate tickets and ensure critical fixes are escalated.
- **Data cleanup and system adjustments**: Explore bulk operations to delete erroneous DSS-generated data while preserving validated historical configurations. This aims to reduce manual intervention and restore workflow efficiency.
- **Enhanced monitoring tools**: Develop a unified dashboard to replace scattered trackers (Excel, Power BI), incorporating real-time filters for backlog management. This will reduce coordination overhead and accelerate developer prioritization.

---

## AI Usage and Governance at Navigator

Navigator uses AI primarily for bill processing via DSS, with strict governance protocols but emerging compliance considerations.

- **DSS's AI integration**: Leverages OpenAI's API (hosted on Microsoft Azure) for OCR and data extraction from utility bills (a rough sketch of the extraction call follows this summary). The process underwent Responsible AI (RAI) review, ensuring data is not used for model training and includes human auditing.
- **Compliance and disclosure**:
  - **Third-party safeguards**: The Microsoft/OpenAI contracts include data anonymization and aggregation clauses, but further validation is needed to confirm adherence to privacy standards.
  - **Customer transparency**: Proactively disclosing AI use in privacy policies or terms is recommended, especially for bill data processing. Legal is reviewing AI addendums for customer agreements.
- **Internal AI tools**: Only approved, closed versions of ChatGPT/Copilot are permitted. The SAM chatbot uses predefined SQL queries, not AI, to retrieve customer data, minimizing risk.
- **Future AI use**: No immediate plans for new AI features, though market pressure may drive future roadmap considerations (e.g., predictive analytics). All projects require RAI committee approval.
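A very rough sketch of what the extraction step could look like against the Azure-hosted OpenAI API. The deployment name, prompt, and output fields are hypothetical; per the summary, the real pipeline also runs OCR (Document Intelligence) before this step and routes results through human audit.

```python
import json
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    api_key="***",
    api_version="2024-02-01",
    azure_endpoint="https://example-resource.openai.azure.com",
)

def extract_bill_fields(ocr_text: str) -> dict:
    """Ask the model for structured fields from a bill's OCR text.

    "bill-extractor" is a hypothetical Azure deployment name; the field
    list is illustrative, not the production schema.
    """
    response = client.chat.completions.create(
        model="bill-extractor",
        messages=[
            {"role": "system",
             "content": "Extract account_number, vendor, invoice_date, "
                        "due_date, and total_amount from the bill text. "
                        "Reply with JSON only."},
            {"role": "user", "content": ocr_text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

The governance points in the summary attach here: the Azure-hosted endpoint is what keeps bill data inside the contracted boundary (no training on customer data), and the human audit sits downstream of this call rather than being replaced by it.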
AP File Standardization & Banyan
### Standard AP File Export Requirements

The meeting centered on defining a standardized AP file export format that includes all essential invoice details and associated attributes, regardless of client-specific custom fields. This approach aims to provide a universal data foundation that clients can adapt to their ERP systems (e.g., Oracle, SAP, JD Edwards) or property management platforms (e.g., AppFolio, Entrata, RealPage, Yardi). A sketch of the standard-plus-mapping model follows this summary.

#### Customization Expectations and Limitations

- **Client responsibility for mapping**: Clients must handle data alignment within their ERP systems post-export, such as relocating columns or reformatting fields to match their internal requirements.
- **Inherent need for tweaks**: Even standardized files may require adjustments due to ERP version differences or unique client configurations (e.g., Oracle 6.4 vs. 16.1 column mappings), but these should not necessitate fundamental changes to the core export structure.

### Monetization and Resource Allocation

- **Charging for customizations**: Development effort for client-specific modifications (e.g., concatenating fields or altering bulk import formats) must be monetized, with a proposed $2,500 fee to prevent resource drain.
- **Impact on development sprints**: Unmanaged customization requests currently consume a disproportionate share of developer bandwidth, delaying other priorities.

### Standardization Progress and Scope

- **Targeted coverage**: The upcoming standard AP file (scheduled for an October 1 release) is designed to address ~80-90% of client needs across major financial systems.
- **Focus on core ERPs**: Emphasis shifted toward prioritizing compatibility with true financial ERPs (Oracle/SAP/JD Edwards) over property management systems, as the latter often involve intermediary processing (e.g., Banyan exporting to Yardi).

### Guardrails for Client Expectations

- **Clear documentation**: A publicly available standard specification (e.g., "Yardi Version 1.3") will define the included features, eliminating ambiguity about "free" support.
- **Sales/CSM alignment**: Prevent overpromising by clarifying that "AP file development" refers only to standardized exports, with customizations requiring separate scoping and fees.

### Implementation Challenges

- **Timeline risks**: Current processes can delay customization delivery by 3-5 weeks due to scheduling conflicts and developer availability.
- **Versioning complexities**: While core fields remain consistent across ERP versions, edge-case adjustments (e.g., field repositioning in updates) may still arise.

### Path Forward

- **Finalize baseline standards**: Accelerate deployment of the October 1 standard to reduce recurring "from-scratch" file builds.
- **Define customization boundaries**: Develop a "top 10" list of common paid customizations (e.g., logic-driven field concatenation) to guide sales and client discussions.
- **Resource allocation**: Dedicate a fractional developer (e.g., 25% capacity) exclusively to streamlined customization work, funded by the fees.
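A minimal sketch of the standard-first model discussed in this meeting: one canonical AP export that every client receives, with client-specific tweaks expressed as a thin paid mapping layer on top rather than a per-client file build. Column names are illustrative, not the October 1 spec.

```python
import csv
from pathlib import Path

# Illustrative canonical column set; the real one is the published standard.
STANDARD_COLUMNS = ["invoice_id", "vendor_name", "account_number",
                    "amount", "due_date", "gl_code"]

def write_standard_ap_file(rows: list[dict], path: Path) -> None:
    """Emit the universal AP file every client gets by default."""
    with path.open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=STANDARD_COLUMNS)
        writer.writeheader()
        writer.writerows({c: r.get(c, "") for c in STANDARD_COLUMNS} for r in rows)

def apply_customization(rows: list[dict], spec: dict) -> list[dict]:
    """Paid, scoped tweak: rename/reorder columns per a client's mapping spec."""
    return [{new: r.get(old, "") for old, new in spec.items()} for r in rows]

if __name__ == "__main__":
    rows = [{"invoice_id": "42", "vendor_name": "Gas Co", "account_number": "77",
             "amount": "310.55", "due_date": "2025-03-20", "gl_code": "6100"}]
    write_standard_ap_file(rows, Path("ap_standard.csv"))
    # e.g. a hypothetical Oracle layout that wants different header names:
    print(apply_customization(rows, {"invoice_id": "INV_NUM", "amount": "AMT"}))
```

The design point is that a customization is data (a mapping spec), not code: that is what makes a "top 10" list of paid tweaks scopable at a flat fee instead of a 3-5 week development effort.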
DSS Planning Meeting
## Summary

The discussion centered on addressing critical operational challenges with the DSS (Data Services System) implementation, which caused significant invoice processing backlogs and systemic inefficiencies. Key problems included the abrupt, non-phased rollout of DSS, leading to 30,000+ pending invoices, compounded by four interconnected issues:

- Inadequate DSS validation logic
- Utility API download failures
- UBM (Utility Bill Management) system incompatibility with DSS outputs
- Personnel workflows misaligned with the new process

The conversation emphasized shifting to a structured, human-audited review phase for DSS outputs to manage the backlog, alongside implementing a ticketing system for issue triage. Long-term solutions discussed involved phased fixes to DSS, UBM optimizations, and expanding utility partnerships. Additionally, plans to develop a public DSS API and a unified operational dashboard were highlighted as ways to reduce manual effort and improve customer self-service.

## Wins

- **Jira Integration Solution**: A prototype form-to-Jira automation was created to streamline DSS/UBM issue reporting, replacing chaotic Teams/email workflows. This allows UBM teams to submit feedback (bugs/features) with attachments, auto-generating tickets for engineers.
- **API Strategy Validation**: Early development of a public DSS API endpoint aims to offload manual CSM tasks (e.g., bill data extraction), enabling customer self-service for CSV/JSON/XML outputs (a serialization sketch follows this summary).

## Issues

1. **DSS Rollout Failure**:
   - Non-phased deployment caused systemic bottlenecks.
   - Lack of pre-production testing for real-world edge cases (e.g., utility-specific nuances).
2. **Backlog Escalation**:
   - 30,000+ invoices stuck due to DSS errors, UBM incompatibility, and download failures.
3. **Process Fragmentation**:
   - UBM batch processing rejects entire batches for minor formatting issues (e.g., inconsistent capitalization).
   - Manual workflows dominate post-DSS review, with no centralized ticketing.
4. **Tooling Gaps**:
   - No dedicated Jira board for DSS feedback; engineers handle issues ad hoc via Teams.
   - Proliferation of 17+ disjointed trackers (Excel, Power BI) for invoice status monitoring.

## Commitments

- **Immediate DSS Triage**:
  - Deploy two Jira-integrated forms (one for DSS, one for UBM), each with a designated owner. Teams will submit issues via these forms, with automated ticket creation and triage.
- **Operational Workflow Shift**:
  - Mobilize 20+ UBM personnel to focus exclusively on human-audited, post-DSS invoice reviews.
- **Strategic Alignment**:
  - Review all operational trackers/dashboards to consolidate them into unified views.
  - Finalize requirements for a customer-facing DSS API (owner: Engineering).
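A minimal sketch of the self-service output idea behind the public DSS API: one function handing back the same bill data as CSV, JSON, or XML on request. Field names and the endpoint shape are assumptions; the real API is still being scoped.

```python
import csv
import io
import json
from xml.etree import ElementTree as ET

def serialize_bills(bills: list[dict], fmt: str) -> str:
    """Render bill records in the caller's requested format."""
    if fmt == "json":
        return json.dumps(bills, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=list(bills[0]))
        writer.writeheader()
        writer.writerows(bills)
        return buf.getvalue()
    if fmt == "xml":
        root = ET.Element("bills")
        for bill in bills:
            node = ET.SubElement(root, "bill")
            for key, value in bill.items():
                ET.SubElement(node, key).text = str(value)
        return ET.tostring(root, encoding="unicode")
    raise ValueError(f"unsupported format: {fmt}")

if __name__ == "__main__":
    bills = [{"invoice_id": "9001", "vendor": "Water Works", "total": 112.40}]
    for fmt in ("json", "csv", "xml"):
        print(serialize_bills(bills, fmt))
```

Every extraction a customer runs through an endpoint like this is one less manual CSM pull, which is the offloading rationale named in the Wins section.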
[EXTERNAL]FW: [EXTERNAL]Constellation Navigator - SOC Report Touchpoint
## Prospect

The prospect is Maurice Goodman, Product Owner of the Constellation Navigator Carbon Accountant application. He is joined by colleagues David (Product Owner of the Glimpse energy savings software) and Faisal (Product Owner of the Utility Bill Management system). Collectively, they manage distinct applications under the Constellation Navigator brand. Their primary challenge involves maintaining SOC 1 and SOC 2 compliance across these applications efficiently, particularly as David's product line seeks SOC 2 compliance. They currently use Vanta for SOC 2 and handle SOC 1 more manually, facing inefficiencies in evidence collection, audit coordination, and vendor management (e.g., using Cobalt for separate penetration testing). Their goal is to streamline compliance processes, reduce manual effort, and leverage automation while ensuring a robust security posture for customer trust.

## Company

Constellation Navigator operates as an umbrella brand housing multiple distinct applications:

- **Carbon Accountant:** Focused on carbon accounting (SOC 2 compliant).
- **Utility Bill Management:** Manages utility billing data (SOC 1 compliant).
- **Glimpse:** Provides energy savings calculation software (seeking SOC 2 compliance).

While operating under the same brand, each application has its own:

- **Technical Stack:** Primarily AWS, with some components on Azure or GCP, and plans to consolidate towards Azure.
- **Code Repositories:** Separate GitHub repositories per application.
- **SOC Reports:** Individual SOC 1 or SOC 2 reports issued per application, as customers expect application-specific attestations.

Despite these differences, they share common **Constellation-wide policies** (e.g., HR onboarding, security baselines) and aim to leverage synergies where possible. The company faces challenges in managing compliance across these siloed applications, coordinating evidence collection, responding to security questionnaires, and presenting a unified security posture to customers.

## Priorities

The prospect's key priorities for a compliance solution center on efficiency, consolidation, and enhanced capabilities:

1. **Unified SOC 1 & SOC 2 Management:** Seeking a single platform to manage both frameworks simultaneously, leveraging overlapping controls (e.g., user access reviews, MFA) to avoid duplication of effort and achieve economies of scale.
2. **Automated Evidence Collection & Audit Support:** Prioritizing robust automation for evidence gathering (integrating with GitHub and cloud providers like AWS/Azure/GCP), clear audit trails for auditors, and streamlined audit coordination to replace manual processes and spreadsheets.
3. **Integrated Penetration Testing:** Replacing the current separate vendor (Cobalt) with an integrated solution where penetration testing scoping, execution, vulnerability tracking, remediation evidence, and reporting are centralized within the platform and linked directly to relevant controls.
4. **Customer-Facing Trust Center:** Implementing a branded trust center to proactively showcase compliance status (SOC 1, SOC 2) and security controls to customers, reducing repetitive security questionnaires. This requires features like access request management and potential NDA workflows to satisfy legal/security concerns about information disclosure.
5. **AI-Powered Efficiency:** Utilizing AI capabilities, particularly for automating responses to security questionnaires by leveraging existing platform data and a trained knowledge base, significantly reducing manual effort in RFP processes.
6. **Optimized Multi-Application Setup:** Determining the most efficient platform configuration (e.g., one instance with filtered views per application vs. separate instances) to manage compliance for their three distinct applications while still leveraging shared policies and controls where applicable.
7. **Proactive, Auditor-Led Consultancy:** Valuing embedded consultancy from experienced (ex-auditor) professionals offering proactive guidance, audit readiness support, and ongoing compliance management throughout the year, not just during audit periods, without hourly limitations.
DDAS/FDG Logic Overview
## Summary

### Operational Challenges and Team Dynamics

The organization faces significant operational hurdles, particularly within the UBM (Utility Bill Management) product, including manual processes, communication silos, and a lack of scalable solutions. A major pain point is a backlog of ~40,000 invoices, largely attributed to a recent AI processing rollout that introduced errors and delays. This has led to customer frustration, threats of contract cancellations, and internal burnout. Teams are fragmented, with feedback often lost between departments (e.g., from Customer Success Managers to Product/Dev teams), and leadership is perceived as overly focused on firefighting rather than strategic improvements.

### Technical and Process Limitations

Critical system limitations hinder efficiency and customer satisfaction:

- **Reporting Inflexibility**: Customers cannot self-serve custom reports (e.g., adding date columns or splitting costs across locations). Each change requires developer intervention, taking months to implement. Example: PPG requested date-stamped filenames and new columns for payment tracking, a universal need that was nonetheless treated as non-customizable.
- **Payment File Setup**: Creating custom AP files for clients involves redundant meetings and manual spec reviews. Example: Park National Bank's simple 3-column requirement still required multiple cross-team calls due to unclear workflows.
- **Legacy System Dependencies**: DDAS (legacy) and FDG Connect (modern) operate in parallel. DDAS remains essential for tasks FDG Connect can't handle (e.g., processing complex bills), forcing dual maintenance and user reliance on outdated tools.

### System Architecture and Workflows

The bill-processing pipeline involves multiple disjointed systems:

1. **Data Ingestion**: Bills enter via:
   - Manual uploads to **FDG Connect** (for web portal invoices).
   - Legacy **DDAS** for exceptions (e.g., brewery bills).
   - Third-party services (e.g., Utility API) for automated feeds.
2. **Processing**: Uploaded bills enter a batch queue in **DSS (Data Services System)**, where an AI/LLM attempts automated extraction. Successfully processed bills proceed; the rest are flagged with errors.
3. **Error Handling**: ~23,000 bills are stuck in "Ready to Send" status due to AI misclassifications. These require manual review/correction in FDG Connect before reprocessing; no auto-retry exists.
4. **Output**: Validated bills are pushed to UBM for end-user access.

### Feedback and Prioritization Gaps

A core cultural/operational issue is the absence of a centralized feedback mechanism. Enhancement requests (e.g., dynamic reporting) are emailed ad hoc to individuals like the former product owner, who often dismissed them without pipeline visibility. This discouraged innovation and left CSMs feeling unheard. Leadership prioritization is misaligned with customer needs, focusing heavily on AI bill processing while neglecting foundational improvements like self-service customization.

### Strategic Opportunities

Key areas for transformation were highlighted:

- **Automate Scalable Solutions**: Replace manual steps (e.g., bill downloads, payment file setups) with self-service tools. Example: a drag-and-drop report builder would reduce the dev backlog and accelerate customer customization.
- **Centralize Feedback**: Implement a structured intake system (e.g., a repository plus a triage process) to capture and prioritize CSM/customer insights.
- **Break Silos**: Foster cross-team transparency (e.g., direct CSM access to product/dev) to accelerate issue resolution.
- **Modernize Legacy Components**: Fully migrate DDAS functionality to FDG Connect to eliminate redundancy and streamline training.
Update Kerry's File
## File Replication and Interface Testing Requirements

The meeting focused on confirming Constellation's ability to replicate a specific text-based sample file format required for SAP integration, with discussions revealing complexities around file structure customization and data mapping. Key points included:

- **File format replication**: Constellation confirmed technical feasibility but highlighted challenges in generating non-CSV text files with fixed-width spacing and specific header/detail line structures (a fixed-width sketch follows this summary).
- **Data mapping dependencies**: A comprehensive mapping document is required to align data elements (e.g., vendor numbers, posting periods, document dates) between systems, as the sample file's structure lacks clear field-length specifications.

## File Transfer Mechanism and Infrastructure

Clarification was sought on SFTP file delivery logistics, exposing a critical disconnect in infrastructure responsibilities:

- **SFTP hosting**: Constellation confirmed they *do not host SFTP locations*; BP must provide a Sterling-managed SFTP endpoint for file retrieval.
- **Sterling middleware role**: BP's Sterling middleware would pull files from Constellation's output directory, requiring configuration details (credentials, directory paths) yet to be finalized.

## Project Timeline and Deployment Windows

Urgent deadlines for interface testing and deployment were discussed, with alignment on phased approaches:

- **Testing phase**: Target of September 17th for non-production environment tests, contingent on receiving the sample file specifications by early September.
- **Production deployment**: Mandatory completion by mid-November 2023 to align with BP's system maintenance windows and year-end operational freezes.
- **Critical path**: File replication development must conclude by October to accommodate end-to-end testing before the November deadline.

## Technical Specifications Deep Dive

A live walkthrough of the sample file revealed unresolved structural complexities:

- **Header and line-item formatting**: The file uses "H"/"D" identifiers with date-timestamp prefixes (e.g., `20250320`), but the spacing between fields requires explicit character-count specifications.
- **Data element ambiguities**: Vendor number lengths (a 10-digit requirement vs. sample inconsistencies) and date-field logic (document date vs. posting period) need clarification to prevent payment term miscalculations.

## Resolution Path and Follow-Up Actions

Concrete next steps were defined to unblock progress:

- **Mapping documentation**: BP committed to providing a field-by-field breakdown of the sample file (including data types, lengths, and SAP correlations) by early next week.
- **Feasibility assessment**: Constellation will finalize effort estimates by September 5th after reviewing the mapping document, with a focus on custom text-file generation timelines.
- **Sterling configuration**: BP to resolve SFTP access logistics internally, confirming whether Constellation must push files to a BP-hosted location or whether Sterling can pull directly.
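A minimal sketch of the fixed-width file shape under discussion: "H" header and "D" detail records behind a `20250320`-style date prefix. Every field width here is a placeholder pending BP's mapping document, which is exactly the gap the meeting identified.

```python
from datetime import date

def fixed(value: str, width: int) -> str:
    """Truncate or left-justify a value into a fixed-width column."""
    return value[:width].ljust(width)

def header_line(doc_date: date, vendor_number: str) -> str:
    # Widths below are assumptions: 8-char date, 1-char record type,
    # 10-digit zero-padded vendor number, 3-char currency.
    return (f"{doc_date:%Y%m%d}"
            + fixed("H", 1)
            + fixed(vendor_number.zfill(10), 10)
            + fixed("USD", 3))

def detail_line(doc_date: date, amount_cents: int, gl_account: str) -> str:
    # Assumed widths: 12-digit zero-padded amount in cents, 10-char GL account.
    return (f"{doc_date:%Y%m%d}"
            + fixed("D", 1)
            + fixed(str(amount_cents).rjust(12, "0"), 12)
            + fixed(gl_account, 10))

if __name__ == "__main__":
    d = date(2025, 3, 20)
    print(header_line(d, "4471"))     # 202503|20H0000004471USD (fixed columns)
    print(detail_line(d, 184250, "6100"))
```

The sketch makes the dependency concrete: without explicit character counts per field from the mapping document, any generator like this is guesswork, so the feasibility estimate is rightly gated on that deliverable.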
Backlog in Data Services - Please invite others I missed
## Summary

The meeting focused on transitioning responsibilities and providing comprehensive context about tools, processes, and ongoing challenges. Key topics included access verification for collaboration platforms (Slack, HubSpot, Jira, Confluence), documentation status, and operational workflows. Significant time was dedicated to reviewing Data Services (a legacy invoice processing system acquired in 2020), which faces critical issues including technical debt, inadequate documentation, and a backlog of 36,000 unprocessed invoices. The system's modernization efforts using LLMs and Azure Functions were discussed, alongside challenges like inconsistent performance and staffing gaps. A live demonstration illustrated the invoice acquisition and processing workflow, highlighting inefficiencies in task prioritization and late-payment risks for prepaid customers.

## Wins

- Positive feedback received on recent system enhancements.
- Successful migration toward modern tech stacks (Python, Node.js, Azure Functions) for Data Services.
- Outsourcing web downloads to a cost-effective offshore team to accelerate invoice acquisition.
- Integration of Microsoft Document Intelligence and LLM-based processing improving OCR accuracy despite scalability challenges.
- Reduction in invoice backlog observed following targeted batch processing efforts.

## Issues

- **Access & Tools**:
  - Initial access issues with Pair AI Confluence; Slack underutilized for team communication.
  - Jira backlog contains vague or underspecified tickets; outdated projects clutter the workspace.
- **Data Services**:
  - Severe documentation gaps and legacy architecture (VM-based SQL, monolithic design).
  - Critical backlog of 36,000 invoices, risking late fees (e.g., 10-day delays for prepaid accounts).
  - Understaffed team: only one senior engineer (on leave) and a mid-level contractor handling core operations.
  - LLM batch processing SLA breaches (24+ hours vs. a 10-minute historical average).
- **Process Gaps**:
  - Inconsistent use of Smartsheet for emergency payment logging and onboarding tracking.
  - Vendor-level (not account-level) invoice tracking causing erroneous task alerts.
  - Overreliance on Google Drive for file sharing instead of corporate SharePoint.

## Commitments

- **Access Provisioning**:
  - Granting access to Slack's DevOps channel, Data Services repositories, and Smartsheet project trackers.
  - Sharing historical Google Drive folders for context (despite the migration to SharePoint).
- **Documentation Handoff**:
  - Providing a comprehensive UBM transition document detailing system quirks, vendor contacts (e.g., BDE for invoice automation), and team dynamics.
  - Sharing Whimsical workflow diagrams for invoice processing and system architecture.
- **Operational Follow-ups**:
  - Prioritizing prepaid customer invoices to mitigate financial risk.
  - Scaling overnight batch processing of invoices to reduce the backlog.
  - Reviewing vendor integration options (Utility API, Arcadia) for cost optimization.
- **Coordination**:
  - Connecting with the Scrum Master (Andrea) and Tech Lead (Ruben) for Jira backlog refinement.
  - Meeting with Jay (UK-based) to align on Q4 planning and sprint commitments.
UBM Planning
## Summary

The discussion focused on onboarding and transitioning responsibilities for the new Product Owner role covering UBM (Utility Bill Management) and related systems. Key objectives include gaining system access, understanding the complex integration between UBM and Data Services (DSS), and establishing ownership over the data pipeline. The transition from manual processes to LLM-driven automation is underway but faces challenges, particularly in normalizing data schemas between legacy systems. Upcoming priorities include deep dives with technical leads (Shake for DSS ingestion and Gaurav for LLM integration), reviewing sales materials to align with customer expectations, and planning an in-person team meeting to foster collaboration. The long-term vision involves consolidating DSS functionality into UBM, exploring monetization of the LLM API, and enhancing advisory capabilities using UBM data.

## Wins

- Successful reduction in recurring LLM issues, indicating improved model stability.
- Automation of invoice processing via LLM has significantly reduced manual backlog effort.
- Access to critical systems (Confluence, Jira, roadmap) has largely been granted, accelerating onboarding.
- The acquisition of Emodeler introduces energy-efficiency modeling capabilities to leverage UBM data for advisory services.
- A new offshore team in Mexico is being onboarded to handle invoice collection, further automating data ingestion.

## Issues

- **Data Services Ownership Gap:** No dedicated product owner for DSS, leading to unclear accountability and prioritization.
- **System Integration Complexity:** Manual, error-prone processes exist between DSS and UBM due to historical schema mismatches, causing data flow issues and QA bottlenecks.
- **Terminology Ambiguity:** "UBM" is often used ambiguously, sometimes referring solely to the UI layer and sometimes including DSS functionality.
- **Technical Silos:** Separate development environments (GCP for UBM, non-Azure for DSS, AWS for other components) and distinct teams (Cognizant for UBM, a separate team for DSS) hinder collaboration.
- **Customer Feedback Backlog:** Historical lack of prioritization for customer-requested features (e.g., UI enhancements) has created an accumulation in the backlog.
- **Upstream Data Gaps:** Reliance on manual PDF collection from utility portals remains a bottleneck, limiting the benefits of downstream LLM processing.
- **Issue Routing Inefficiency:** Technical problems are often reported to non-technical teams (e.g., Ops, CSM) instead of being routed directly to the responsible developers.

## Commitments

- **Schedule Technical Deep Dives:** Meetings will be arranged with Shake (DSS lead) and Gaurav (LLM lead) to understand their systems and challenges.
- **Plan In-Person Meeting:** Organize a face-to-face session for the broader UBM/DSS team to improve collaboration.
- **Review Sales Materials:** Provide access to sales presentations to understand current customer positioning and expectations for UBM.
- **Establish Recurring Sync:** Implement a recurring check-in (targeting Tuesdays and Thursdays) for ongoing alignment and support.
- **Conduct End-to-End Walkthrough:** Schedule a session to trace a utility bill's journey from source ingestion through DSS processing to UBM presentation, identifying friction points.
- **Resolve Access:** Address the laptop setup documentation access issue via follow-up with the IT team.
UBM Planning
## Summary

The meeting served as an introductory discussion focused on onboarding, strategic context for the Utility Bill Management (UBM) product, and key priorities. The new team member is actively syncing with the existing leads (Sunny and Chris) to get up to speed before Chris transitions roles. There is enthusiasm about leveraging fresh perspectives to evaluate the platform's architecture and development efficiency, with a review of findings targeted for late September or October.

The broader Navigator business unit was outlined, highlighting its origin as a Constellation initiative to enhance customer value beyond power/gas sales through data ownership. Navigator has shown significant growth, targeting $16.6 million in gross margin this year, up from $10 million last year. The scope also includes new product incubation (e.g., behind-the-meter solutions, data center work) and commercializing technologies from Constellation's clean tech venture fund (CTV).

Key priorities for UBM over the next 6-12 months were emphasized:

1. **Re-balancing Development Focus:** Shifting from the current heavy tilt towards operational/feature support (estimated 80%) back towards strategic innovation and new development to stay competitive.
2. **Enabling Self-Service:** Developing capabilities that allow customers to access information and perform tasks independently, reducing reliance on custom reporting and manual support.
3. **Streamlining Data Acquisition:** Making UBM the central hub for data ingestion, with the goal of eventually sunsetting the separate data acquisition team and process.
4. **Long-Term Platform Unification:** Moving towards a single, unified platform experience with modular capabilities, integrating UBM and the carbon accounting platform, rather than maintaining separate systems with single sign-on.

The new team member shared relevant experience in platform unification from a previous role involving multiple acquisitions, aligning with the long-term vision. Plans were discussed for an in-person meetup in Baltimore in September and establishing regular sync meetings to discuss strategy and progress.

## Wins

- **Navigator's Growth Trajectory:** Significant progress confirmed, with gross margin increasing from $10 million last year to a $16.6 million target this year, validating the business line's potential.
- **Successful Acquisition & Integration Strategy:** Navigator's foundation, built through strategic partnerships and acquisitions (like UBM), is acknowledged as a successful approach.
- **New Team Integration:** Positive onboarding progress noted, with the new team member engaging in deep dives with key personnel (Sunny, Chris) to accelerate understanding.

## Issues

- **Development Resource Imbalance:** Development effort is heavily skewed (estimated 80%) towards operational support and customer feature requests, leaving insufficient capacity (estimated 20%) for strategic innovation and new development.
- **Limited Self-Service Capabilities:** The current platform requires excessive manual intervention for customer reporting and data access, hindering scalability and customer autonomy.
- **Inefficient Data Acquisition:** The existing separate data acquisition process is seen as cumbersome and a candidate for sunsetting, needing integration directly into UBM.
- **Fragmented Platform Experience:** The coexistence of separate platforms (UBM, carbon accounting) creates a disjointed user experience, necessitating a longer-term unification strategy.

## Commitments

- **Strategic Review Session:** A dedicated session will be scheduled for late September or October to review findings from the new team member and Sunny regarding the platform's architecture, development efficiency, and strategic direction.
- **In-Person Meetup:** An in-person meeting in Baltimore is planned for September to facilitate broader team introductions and collaboration.
- **Establishing Recurring Syncs:** Regular meetings involving the new team member, Sunny, and leadership will be set up to systematically discuss progress and strategy for UBM.
UBM Planning
## Platform Background and Evolution UBM originated as Paradot AI, acquired by Constellation in late 2019 when it was pre-revenue with minimal features. The platform was designed to help Constellation maintain customer engagement beyond transactional energy contracts by leveraging monthly utility bill data. Initial capabilities included bill viewing, location attribute management, and AP file generation for payments. Novel features at launch included an AI-driven query system (similar to early voice assistants) for data interrogation and basic dashboards. Over time, the focus shifted to operational efficiency-automating validations, implementing auto-resolution for low-impact alerts, and developing a flexible validation framework adaptable to customer service levels. Product-market fit solidified around organizations with complex utility footprints (100+ accounts/meters), where AP teams (prioritizing timely payments) and energy managers (analyzing portfolio efficiency) became primary user personas. - **Key drivers for development**: - Disrupting broker-dominated markets (70%+ penetration) by building direct customer relationships through data utility. - Addressing operational bottlenecks: Manual validation resolution was resource-intensive, leading to automation initiatives that reduced human intervention. - Emergence of bill pay as a growth catalyst: Partnering with PayClearly enabled payment distribution while outsourcing regulatory (AML/KYC) compliance, scaling payments under management to ~$750M. - -- ## Core Functionality and User Workflows The platform processes utility bills (electricity, gas, water, telecom) through integrated workflows. Key modules include bill acquisition, data extraction, validation, analytics, and payment orchestration. ### Bill Processing Pipeline - **Acquisition**: Bills enter via physical mail (scanned in Idaho Falls), SFTP, email, or portal scraping (manual or via BDE automation). - **Data Extraction**: - Legacy: Template-based OCR with manual data entry. - Current: AI/LLM-driven (OpenAI + Microsoft Document Intelligence) extracts data into JSON, categorizing line items (e.g., delivery vs. supply charges). Challenges persist in normalizing diverse utility terminologies. - **Validation**: 70+ automated checks (e.g., usage anomalies, load factor imbalances) flagged for review. Service-level configurations allow custom rule sets. ### User Engagement and Features - **Persona-Specific Use Cases**: - AP Teams: Prioritize payment timeliness, emergency payment triggers, and reconciliation visibility. - Energy Managers: Leverage portfolio analytics (e.g., usage benchmarking across identical facilities). - Sustainability Teams: Extract data for carbon reporting integrations. - **Analytics**: Normalized daily usage/cost metrics accommodate irregular billing cycles. Weather data (HDD/CDD) enables usage correlation, though weather normalization remains unimplemented. - **Payment Integration**: PayClearly handles transactions via FBO accounts, with payment status/timing visible in UBM. - -- ## Technical and Operational Challenges Operational hurdles stem from fragmented processes and scaling pressures. - **Data Services Dependencies**: - Bill ingestion and initial data extraction occur outside UBM, creating visibility gaps (e.g., unable to track real-time download status). - AI data extraction deployed in June 2024 increased error rates due to unvetted outputs, bypassing human audit. 
- **Invoice Processing Delays**: High volumes and AI inaccuracies caused backlogs, risking late fees ($600+ per incident). Contingency: prioritizing costlier but faster bill retrieval methods (e.g., UtilityAPI) over BDE.
- **Dashboard Limitations**: The landing page lacks real-time metrics (e.g., bills processed, errors resolved), masking operational effort.

---

## Competitive Landscape and Market Position
UBM targets mid-market clients underserved by legacy solutions.
- **Competitor Weaknesses**:
  - **Cass**: Outdated interface ("mainframe-like"), rumored exit from the space.
  - **NG**: Aging technology, reportedly for sale, reliant on float revenue.
- **UBM Differentiation**: Modern UI, transparent payment tracking, and integrated analytics. Float revenue potential is untapped (estimated 3-5 day float at $1B+ under management).
- **Niche Players**: Conservice (property management focus), Arcadia, and UtilityAPI offer specialized capabilities but at higher costs.

---

## Strategic Initiatives and Roadmap
Near-term priorities aim to enhance automation, reduce dependencies, and deepen analytics.
- **Q2-Q3 2024 Focus Areas**:
  - **Late Bill Management**: Integrating Power BI-driven late bill tracking directly into UBM.
  - **Payment Transparency**: Adding timestamps for payment milestones (e.g., "funds requested") to audit workflows.
  - **History Load Optimization**: Migrating historical data ingestion from Data Services to UBM.
  - **Auth0 Integration**: Implementing single sign-on (SSO) for streamlined access.
- **Technical Debt**:
  - Deprecating Data Services components by enabling direct bill uploads.
  - Building feedback loops to refine AI categorization prompts (currently firefighting inaccuracies).
- **Unaddressed Opportunities**:
  - Drag-and-drop bill uploads (de-prioritized since 2019).
  - Customizable dashboard widgets (in development following Mitsubishi feedback).
  - Weather-normalized usage reporting.

---

## Operational Metrics and Documentation
Performance tracking and knowledge management are centralized but require reactivation.
- **Key Metrics**: Daily monitoring of emergency payments, late fees incurred, SLA adherence (2-day prepay, 5-day postpay), and payment errors (e.g., missing accounting codes). Historically these were reviewed daily, but the practice lapsed after June 2024.
- **Documentation Hubs**:
  - **Confluence**: Technical specs, validation rules, DevOps processes.
  - **Teams/SharePoint**: Customer-facing SOPs, SOC1 compliance docs, strategic plans (e.g., annual goals).
- **Toolchain**: Jira (scrum), Heap (analytics), Power BI (internal reporting), Delighted (feedback).
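The validation framework described above (70+ automated checks with service-level rule sets) is easiest to picture as small rule functions evaluated per bill. The sketch below is illustrative only, not UBM's actual implementation; the names (`Bill`, `usage_anomaly_check`, `SERVICE_LEVELS`) and thresholds are hypothetical.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Bill:
    account_id: str
    usage_kwh: float   # usage extracted for the billing period
    total_cost: float

@dataclass
class ValidationResult:
    rule: str
    passed: bool
    detail: str

def usage_anomaly_check(bill: Bill, history: list[float], tolerance: float = 0.5) -> ValidationResult:
    """Flag bills whose usage deviates from the trailing average by more than `tolerance`."""
    if not history:
        return ValidationResult("usage_anomaly", True, "no history; check skipped")
    baseline = mean(history)
    deviation = abs(bill.usage_kwh - baseline) / baseline
    return ValidationResult(
        rule="usage_anomaly",
        passed=deviation <= tolerance,
        detail=f"usage {bill.usage_kwh:.0f} kWh vs. baseline {baseline:.0f} kWh ({deviation:.0%} deviation)",
    )

# Service-level configuration: each tier could enable a different rule set or tolerance.
SERVICE_LEVELS = {
    "standard": {"tolerance": 0.5},
    "premium": {"tolerance": 0.25},  # tighter checks for higher-touch customers
}

bill = Bill("ACCT-001", usage_kwh=18_400, total_cost=2_150.0)
result = usage_anomaly_check(bill, history=[11_900, 12_300, 12_100],
                             tolerance=SERVICE_LEVELS["premium"]["tolerance"])
print(result)  # flags a ~52% deviation against the trailing baseline
```

In a pipeline of this shape, a failed check would surface in the review queue rather than blocking the bill outright, matching the flag-for-review behavior described above.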
Portfolio Strategy Review
## Common Drive Property
Vacant suites approaching three years require aggressive pricing adjustments to attract tenants. After unsuccessful leasing at $1,700/month ($23/sq ft/year), a reduction to $1,400/month ($18/sq ft/year) is proposed, aligning with the dry cleaner tenant's rate. This adjustment is critical due to the property's rural location and declining appeal post-COVID, impacting its marketability. Disposal urgency is heightened as an investor expects resolution by mid-year, which is now delayed.
- **Rent Reduction Strategy:** Proposed cut to $1,400/month based on lack of interest at higher rates and the dry cleaner's current rent serving as the only reliable market indicator.
- **Dollar General Expansion Rejected:** An offer from Dollar General to expand into vacant space without increasing rent, while requiring significant landlord-funded improvements ($50k), was deemed financially unviable despite offering a 10-year lease extension.
- **Disposal Priority:** The property is deemed non-core ("vanished"), and filling vacancies, even at lower rents, is prioritized to facilitate a sale, overriding concerns about a 10-year lease term hindering disposition.

## Fairfield Property
Renewal strategy focuses on the major hospital tenant (88% occupancy) whose lease expires in April next year. The existing CPI-based renewal option would trigger a 32% rent increase, deemed unsustainable. Negotiations aim for a more moderate 10-15% increase in exchange for a 10-year lease term, positioning the property optimally for sale with long-term stabilized income.
- **Hospital Negotiation Leverage:** The hospital has limited alternative options locally and consistently seeks space, strengthening the landlord's position to secure favorable terms.
- **Proposed Rent Increase:** Targeting $16.50/sq ft (a 23% increase from the current blended rate of $13.41/sq ft), with flexibility for negotiation, based on comparable medical office rents of $16-$18/sq ft.
- **Sale Timing:** Securing a long-term lease with the hospital tenant is identified as the key step to maximize property value before a planned sale.

## Alpharetta (Guidepost Global / Montessori School)
The property is part of High Ground Education's bankruptcy. Secured creditors formed Guidepost Global to take over performing leases, requiring a 6% rent reduction for inclusion. The assignment hearing is scheduled for late August but faces potential delays. The existing amendment structure, including deferred rent repayments starting September 2026, remains intact under the new entity.
- **Bankruptcy Assignment:** Court approval is pending to formally assign the lease from the bankrupt entity (High Ground Education) to the new entity (Guidepost Global), with a 6% rent reduction agreed upon as a condition.
- **Deferred Rent Structure:** The pre-bankruptcy agreement deferring rent payments until September 2026, with repayment scheduled through August 2027, continues under Guidepost Global.
- **Future Sale Considerations:** Divergent views exist on optimal sale timing: either soon after assignment to capitalize on perceived stability, or later (post-2027) to capture deferred rent repayments and scheduled annual increases, despite concerns about Guidepost Global's ability to absorb the deferred rent burden.

## Tyler, Texas Property
A vehicle accident severely damaged the vape shop tenant's space after hours, fortunately causing no injuries. Structural integrity is confirmed, and insurance will cover repairs and lost income (subject to deductible).
Concurrently, a state road widening project requires condemnation of approximately 300 sq ft, impacting the property's pylon sign; an initial $76,000 offer has been made, and an eminent domain attorney is engaged to maximize compensation.
- **Accident Recovery:** Insurance covers repairs (interior damage only, no structural issues) and lost rent; the tenant is committed to reopening as the location was gaining traction.
- **Road Widening Impact:** The eminent domain taking affects the signage area; the gas station tenant anticipates reduced traffic access and business disruption once the concrete median barrier is installed (construction starts ~2026).
- **Sale Strategy:** Targeting Q4 2026 for disposition, leveraging a scheduled 10% rent increase for the Whataburger anchor tenant in July 2026 and the remaining lease term.

## Carrollton Property
Loan modification completed: balance reduced to $4.2M (from $5.2M), maturity extended to April 2031, with interest-only payments at 5% until April 2028, then 6%. Securing $3.2M in replacement capital is critical, with only $500k committed so far; efforts continue to attract the remaining funds.
- **Refinancing Terms:** The successful modification provides an extended loan term and favorable near-term interest rates, improving cash flow stability.
- **Capital Raise Challenge:** Attracting the remaining $2.7M in replacement capital is urgent, with the property's strong ~10% cash yield highlighted as a key selling point to potential investors.
- **Timeline Pressure:** Targeting an end-of-October close for the capital raise, acknowledging summer slowdowns but emphasizing renewed efforts post-summer.

## North Dixie Plaza (At Home)
At Home's bankruptcy filing necessitates lease renegotiation. Their initial offer proposed slashing rent to $509k (from ~$802k) for the full 91,000 sq ft space through January 2029. Counteroffers included space reduction or a higher rent (~$670k). Significant debate centered on accepting reduced rent versus rejecting the lease and retenanting, considering lender covenants and investor distributions.
- **Renegotiation Stance:** At Home's low offer ($509k) is countered with proposals around $670k, potentially including a percentage rent component based on sales exceeding $5.2M.
- **Strategic Dilemma:** A major debate occurred on whether to accept reduced rent (risking future defaults and value destruction) or terminate the lease and retenant subdivided space (facing significant vacancy periods and capital needs, but with potential long-term value creation).
- **Lender & Investor Impact:** Accepting reduced rent risks breaching debt covenants and slashing investor distributions (~5% vs. the current 9%). Retenanting could require investor capital calls during vacancy but aims for higher future value. The lender is aware but not currently intervening.
- **Action Plan:** Proceed with a counteroffer (~$670k) requiring At Home to downsize in 2029 with advance notice if renewing, while simultaneously analyzing retenanting feasibility and preparing both scenarios for investor consultation if At Home rejects the offer.
Email Review
## Pending Pre-employment Checks
The drug screening remains the only outstanding requirement for onboarding clearance.
- **Drug screen status**: Taken on the 16th and still pending results, with clearance expected within 1-2 days. Daily monitoring is being conducted, and immediate notification will follow upon clearance.
- **Background check status**: All components except the drug screen are already completed and cleared.

## Identity Verification
Document authentication for the permanent resident card was successfully completed during the meeting.
- Verification required:
  - Displaying the front and back of the card to the camera.
  - Flexing the card to confirm material authenticity.

## Start Date Confirmation
The target start date of August 25 is contingent on drug screen clearance.
- Onboarding will proceed as planned once the drug screen clears, with no other pending requirements.

## First-Day Onboarding Process
Critical onboarding materials will be delivered via email on the afternoon of the first workday.
- **Email contents include**:
  - Direct deposit enrollment: for setting up payroll transfers.
  - Tax form updates: to adjust withholding preferences if needed.
  - Timecard system access: instructions for weekly hour submissions.
  - Benefits enrollment: health insurance and other benefit options.

## Post-Meeting Follow-Up
Immediate next steps focus on finalizing clearance and communication.
- **Drug screen tracking**: Results will be monitored twice daily (morning/afternoon) for expedited updates.
- **Notification protocol**: The recruiter (Jordan) will be informed of current status, and a direct text message will confirm drug screen clearance.
- **Contingency planning**: All other onboarding prerequisites are confirmed complete, leaving only the drug screen as the final hurdle.
Contract Hire Planning
## Summary
This was a pre-onboarding discussion between a hiring manager and a new contractor who has accepted a contract-to-hire position. The conversation focused on finalizing start date logistics and addressing questions about the contract terms. The new hire expressed interest in potentially converting to full-time employment earlier than the standard six-month timeline if performance expectations are met and all parties are aligned. The hiring manager confirmed that the company would be open to early conversion as long as the recruiting agency (Kforce) agrees to adjust the terms.

The discussion covered practical next steps, including completing remaining paperwork with Kforce and determining an official start date. Two potential start dates were discussed: September 1st or August 25th, with the final decision dependent on completing the contracting process. The new hire mentioned having already given their current employer advance notice beyond the typical two weeks, which provides flexibility in timing. Both parties expressed enthusiasm about the upcoming collaboration, particularly noting the advantage of having the new team member located in the DC area for quarterly in-person collaboration opportunities.

## Wins
- **Contract acceptance confirmed**: The new hire is ready to formally accept the offer and begin the Kforce paperwork process.
- **Early conversion possibility**: Company leadership confirmed openness to converting the contractor to full-time before the standard six-month period if performance meets expectations.
- **Flexible start timing**: The new hire has already provided advance notice to their current employer, allowing for flexible start date scheduling.
- **Strong mutual enthusiasm**: Both parties expressed excitement about the collaboration, with particular appreciation for the DC location enabling quarterly in-person meetings.
- **Clear communication channels**: Established a process for end-of-week updates on start date confirmation through either the contractor or Kforce.

## Issues
- **Processing delays**: The onboarding process experienced delays of approximately a month and a half, though most items have now been addressed.
- **Outstanding paperwork**: Some documentation still needs to be completed with Kforce before finalizing the start date.
- **Holiday timing considerations**: A September 1st start date coincides with Labor Day, which may impact the first week of work.

## Commitments
- **New hire**: Will formally accept the offer and complete remaining Kforce paperwork by connecting with their team tomorrow.
- **New hire**: Will provide an end-of-day Friday update on the confirmed start date (either August 25th or alternative timing).
- **Hiring manager**: Will ensure any company-side blockers are addressed promptly once identified.
- **Hiring manager**: Will coordinate internal planning once the start date is confirmed.
- **Both parties**: Will maintain communication through the week to ensure smooth process completion.
Discuss Onboarding Tracking
Product Management Alignment
## Background
Faisal brings approximately 7-8 years of product management experience, starting his career at 10Pearls, a global software consultancy, where he worked with diverse clients across the healthcare, fintech, and entertainment sectors. His most significant role was as the founding product manager at Finmark, a Y Combinator startup, where he was the first team member hired by the founding team. At Finmark, he built a financial planning and analysis tool for startups and SMBs, successfully scaling from zero to thousands of customers within two and a half years, which ultimately led to the company's acquisition by Bill.com. Currently, he serves as a senior product manager at Bill.com, continuing to manage the Finmark product while overseeing feature integrations from Finmark into Bill's broader platform.

## Work Experience
Faisal's recent work has been particularly focused on AI and data interaction solutions. During his time at Finmark, he developed an AI chatbot that allowed users to interact with their financial data, described as a "basic rudimentary version" of what Zenlytic is building. While this project never reached its full potential due to acquisition priorities and the 12-18 month integration timeline, Faisal later worked on the foundational infrastructure that now powers Bill.com's AI chatbot across all their product offerings.

His approach to product management is highly tactical and hands-on. He maintains daily meetings with co-founders, engineering leads, and technical architects to facilitate progress. His workflow involves challenging leadership decisions, working closely with design on prototyping, and writing requirements that are "loose enough for design to do what they think is best for user experience" but "rigid enough that they actually know what they need to do." He emphasizes pre-development alignment, ensuring design and engineering sign-off before moving to development, though he's comfortable making quick decisions when time-sensitive issues arise during development.

Faisal has technical depth with a systems engineering background, basic coding experience, and SQL proficiency. He describes himself as knowing "enough to be dangerous": enough to have meaningful technical conversations and understand complexity.

## Candidate Expectations
### Working Style
Faisal expects to work as a liaison between all teams and departments, serving as the "glue" that connects engineering, design, sales, customer experience, and leadership. He's comfortable working at all levels of the organization and expects to have frequent touchpoints: potentially daily meetings with key stakeholders and at least weekly meetings with engineering leads. He strongly values both synchronous and asynchronous communication, setting up systems that allow team members to provide feedback when convenient while maintaining open lines for urgent matters.

### Role Scope and Responsibilities
He expects to take ownership of the complete product development pipeline, from gathering and filtering feedback to prioritization and execution. Specifically, he anticipates managing both internal feedback systems (from engineering, executives, sales, marketing, and customer success) and external customer feedback. He's prepared to gradually take over customer conversations from the CTO, Paul, aiming to establish direct customer relationships within the first 30 days.
Notably, Faisal expressed strong interest in taking on UX responsibilities, which he found "very interesting" despite acknowledging it's uncommon for product managers to be so involved in UX design. He's prepared to work closely with design to translate complex technical requirements into usable experiences.

### Technical Engagement
He expects to be deeply involved in technical discussions and decision-making, comfortable with the complexity of data modeling, SQL joins, and technical product features. He's prepared to bridge the gap between engineering's technical capabilities and design's user experience focus, particularly for complex features that require domain expertise to properly convey to design teams.

## Questions/Concerns
Faisal raised several strategic questions about the company's current processes and structure. He inquired about the existing engineering interaction processes and roadmap building procedures, identifying that there appears to be limited structure currently in place. He specifically asked about the biggest internal process challenges, learning that prioritization and focus are major issues, with the team frequently pivoting based on whoever is "loudest" rather than systematic feedback analysis.

He was particularly interested in understanding Paul's current prioritization methodology, learning that it's largely instinct-based from customer interactions rather than a formal process. Faisal also inquired about the team's openness to process changes, asking whether they would prefer dramatic overhauls or incremental improvements, and was reassured that the team is very open to better processes and would likely accept new methodologies without significant pushback.

His questions demonstrated a strategic mindset focused on understanding existing gaps and how to systematically address them, particularly around feedback management, prioritization frameworks, and team communication structures.
UBM Product Manager - Interview
## Background
The candidate brings eight years of product management experience with a strong foundation in fintech. They began their career as a consultant at 10Pearls, a global software consultancy, where they worked with over ten clients across the entertainment, fintech, and healthcare sectors, ranging from early-stage startups to Fortune 500 companies. This diverse exposure provided them with broad experience in both B2B and B2C products, though primarily B2B focused.

Following their consulting experience, they transitioned to a founding product manager role at Finmark, a Y Combinator startup developing financial planning and analysis tools for startups and SMBs. During their two and a half years there, they successfully grew the product from zero customers to thousands of users, ultimately leading to the company's acquisition by Bill.com. Currently, they serve as a senior product manager at Bill.com, continuing to manage the Finmark product while working on integration projects and leading various initiatives, including the development of an AI chatbot for financial data engagement.

## Work Experience
The candidate's recent work has been heavily concentrated in the fintech space for approximately six years. At Finmark, they operated as the sole product manager, taking full ownership of product development from conception through acquisition. Their approach emphasized rapid iteration and user feedback, utilizing a customized Kanban methodology rather than traditional Scrum to enable continuous delivery without being constrained by bi-weekly release cycles.

Their work style focuses on three core principles: continuous delivery to get value to users as quickly as possible, a continuous improvement mindset, and prioritizing user feedback through direct user testing and engagement. They demonstrate comfort working with distributed teams, currently collaborating with engineers across India and Pakistan, designers in the UK, and onshore development teams across multiple US time zones.

The candidate shows particular strength in tackling ambiguous problems and enjoys the collaborative aspects of product management, whether working directly with customers to extract requirements or partnering with engineering teams to determine technical feasibility. They've successfully navigated the full product lifecycle from early-stage startup constraints to post-acquisition integration challenges.

## Candidate Expectations
### **Working Style**
The candidate expresses flexibility regarding team composition and geographic distribution, showing no strong preference between working primarily with offshore versus onshore development teams. They're comfortable with the collaborative nature of product management and enjoy switching between customer-facing work and technical problem-solving with engineering teams.

### **Role Preferences**
They demonstrate openness to both available positions, understanding that one role would be more client-facing (with an offshore team) while the other would be more development-focused (with an onshore team). The candidate appreciates that both roles involve similar core responsibilities, with the main difference being team location and the balance between client interaction and development team collaboration.

### **Career Interests**
The candidate is particularly drawn to fintech and finds the concept of providing financial and banking features as a service compelling.
They're interested in roles that offer variety and the opportunity to solve ambiguous problems, showing enthusiasm for positions where they can make meaningful contributions and have their expertise valued.

## Questions/Concerns
The candidate raised several thoughtful questions about the role and company structure:
- **Success Metrics**: They wanted to understand what success would look like in the role after 6-12 months, showing interest in clear performance expectations and measurable outcomes.
- **Role Differentiation**: They sought clarification on the differences between the two available positions and the unique challenges each might present, demonstrating thorough consideration of their options.
- **Technical Architecture**: They inquired about the balance between custom code development versus modular platform features, showing interest in understanding the technical approach and decision-making process for client requests.
- **Client Relationship Management**: They asked about the nature of client relationships and how different clients might require different management approaches, indicating awareness of the consultative aspects of the role.
- **Team Structure**: They wanted to understand how the dedicated client teams integrate with other product teams and how decisions are made regarding platform-wide versus client-specific features.
- **Geographic Considerations**: They inquired about client and team locations, showing practical consideration for time zone management and communication logistics.

The candidate demonstrated strong communication skills when discussing how they handle difficult conversations with clients, emphasizing transparency, clarity, and always providing alternative solutions rather than simply saying no to requests.
AI Analytics Strategy Meeting
## Background
The candidate has a strong background in AI and data analytics, with graduate-level education in symbolic domains. They demonstrate deep technical knowledge of SQL, data visualization, and AI capabilities. Their expertise appears to be in understanding how to balance powerful AI capabilities with user trust and governance, particularly in data analysis contexts. The candidate has experience working with enterprise customers and understands the challenges of implementing AI solutions in sensitive data environments.

## Work Experience
The candidate's recent work experience has focused on developing AI-powered data analysis solutions. They have been heavily involved in creating what they call a "Clarity engine": a system that can generate SQL queries while maintaining user trust through transparent UI components. This project has been their primary focus for about a year and is now approaching general availability.

Their day-to-day responsibilities include extensive customer engagement, having conducted 25-30 calls with customers over the past three months and spoken with 40-50 individuals to understand adoption barriers, desired data sources, and trust requirements. They also participate in sales calls, particularly for architecture discussions with prospects, to understand use cases and identify potential concerns with the product.

The candidate has demonstrated leadership in product strategy, making significant decisions about the technical direction of their product based on their assessment of AI model capabilities. They recognized late last year that AI models were rapidly improving in symbolic domains like coding, which led to a strategic pivot in their product approach.

## Candidate Expectations
### Working Environment
The candidate values a team environment with smart, driven individuals who work hard but maintain low egos. They appreciate a culture where people focus on finding the right answer rather than being right, allowing the team to reach consensus quickly without ego conflicts. They specifically mentioned that their current workplace is "not a chill place to work at all," indicating comfort with a high-intensity work environment.

### Technical Challenges
The candidate is drawn to complex technical problems, particularly the challenge of balancing AI capabilities with user trust in data analysis. They find satisfaction in tackling difficult problems that others haven't solved, such as parsing SQL abstract syntax trees to create trustworthy UI components that help users understand AI-generated queries (a rough sketch of this idea follows below).

### Career Goals
The candidate appears interested in roles where they can influence product strategy and work on cutting-edge AI applications. They show enthusiasm for being at the forefront of AI innovation, particularly in applying large language models to data analysis problems. They seem to value opportunities to make strategic bets on emerging technologies and shape product direction.

### Role Expectations
For a product management role, the candidate expects to execute against high-level roadmaps, balance competing priorities (fixing issues, paying down tech debt, implementing features), and manage feedback from multiple channels (direct customer feedback, sales, customer success). They understand the need to balance enterprise customer needs with SMB requirements and to maintain close alignment with leadership.
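To make the AST-parsing idea from "Technical Challenges" concrete: the sketch below is a rough illustration of the concept, not the candidate's Clarity engine. It uses the open-source `sqlglot` parser to pull tables, columns, filters, and aggregations out of a generated query so a UI could display them for user verification; the `summary` fields are hypothetical UI inputs.

```python
import sqlglot
from sqlglot import exp

# A query of the kind an AI assistant might generate from a natural-language ask.
query = """
SELECT vendor, SUM(amount) AS total_spend
FROM invoices
WHERE invoice_date >= '2024-01-01'
GROUP BY vendor
ORDER BY total_spend DESC
LIMIT 5
"""

tree = sqlglot.parse_one(query)

# Walk the AST and surface what the query actually does.
summary = {
    "tables": sorted({t.name for t in tree.find_all(exp.Table)}),
    "columns": sorted({c.name for c in tree.find_all(exp.Column)}),
    "filters": [w.this.sql() for w in tree.find_all(exp.Where)],
    "aggregations": [f.sql() for f in tree.find_all(exp.Sum)],
    "limit": tree.args["limit"].expression.sql() if tree.args.get("limit") else None,
}
print(summary)
```

A UI built on fields like these can render plain-language chips ("top 5 vendors by total spend since Jan 2024"), letting users verify an AI-generated query before trusting its results.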
## Questions/Concerns
The candidate asked several questions about the company's technical challenges, competitive advantages, and growth strategy:
- What are the biggest technical challenges the team is currently facing?
- What is the company's moat or competitive advantage in the market?
- How is the company keeping up with rapid changes in the AI landscape?
- What will be the biggest growth unlock in the next 12 months?
- What challenges might arise in engineering/product collaboration?
- How is success defined for the product management role?
- How does the company currently engage with customers?
- What excites current team members about working at the company?

The candidate expressed particular interest in understanding how the company realized its competitive differentiation and how quickly it acted on that insight, suggesting they value strategic thinking and decisive action in response to market opportunities.
Software Team Integration Discussion
## Background
The candidate has extensive experience as a product manager at Bill.com, which they joined through the acquisition of Finmark, a financial planning and analysis startup. They have been responsible for building and managing Finmark as a product within the larger Bill.com organization. The candidate has successfully navigated the transition from a startup environment to a larger corporate structure, demonstrating adaptability and strategic thinking in product management roles. Their background includes hands-on experience with product development, team management, and cross-functional collaboration in both startup and enterprise environments.

## Work Experience
The candidate currently operates in a hybrid development environment, having transitioned from Kanban to two-week sprints to align with Bill.com's organizational standards. They manage quarterly, monthly, and weekly planning cycles, with responsibility for ensuring timely delivery of product roadmaps. Their team composition includes front-end and back-end engineers, QA engineers, their respective engineering managers, a technical architect, and a designer. They work closely with the technical architect on architectural decisions and collaborate with engineering managers through weekly meetings.

The candidate has demonstrated strong problem-solving skills during the integration process at Bill.com. When faced with resource constraints from the data team that threatened project timelines, they escalated the issue appropriately and implemented a creative solution by temporarily assigning one of their engineers to work with the data team. This experience led to establishing a new standard process across the organization for identifying and planning external dependencies during quarterly planning.

They have experience with cloud-native architecture and microservices, though they acknowledge their technical knowledge has limitations as a PM. The candidate manages dotted-line relationships with engineering teams and runs roadmap planning meetings with multiple stakeholders three times per week, demonstrating strong leadership and coordination capabilities.

## Candidate Expectations
### **Working Style**
The candidate expects to work in an environment that supports both autonomous decision-making and collaborative problem-solving. They value regular communication cadences and structured ceremonies like retros, which they've found extremely helpful in their current role. They appear comfortable with early morning meetings to accommodate offshore teams and expect flexibility in working arrangements.

### **Team Dynamics**
They expect to work closely with technical architects and engineering managers, maintaining the collaborative relationship style they've established in their current role. The candidate values having autonomy to make product decisions while maintaining open communication channels with engineering leadership for technical guidance and sign-offs.

### **Professional Growth**
The candidate seems interested in continuing to work in environments where they can influence organizational processes and standards, as evidenced by their success in establishing new dependency planning processes at Bill.com. They appear to thrive in situations that require cross-functional coordination and process improvement.
## Questions/Concerns
The candidate expressed several key concerns and questions about the role:
- **Feedback Management Process**: They wanted to understand how Constellation gathers and manages both internal and external feedback, showing interest in customer-centric product development approaches.
- **Offshore Team Collaboration**: They specifically asked about working with the Romanian development team, inquiring about potential challenges with time zones and communication. This suggests they want to ensure they can effectively manage international team dynamics.
- **Organizational Structure**: The candidate sought to understand the relationship between Navigator and the broader Constellation organization, indicating their interest in understanding how the product fits within the larger corporate structure and how cross-team collaboration works.
- **Technical Architecture**: They showed interest in understanding the current platform architecture, asking about cloud-native versus server-native approaches and API composition, demonstrating their desire to understand the technical foundation they'd be working with.

The candidate expressed appreciation for the technical focus of the conversation and indicated they would follow up with additional questions via email, showing continued engagement in the process.
AI Product Strategy Meeting
## Background
The candidate has a strong product management background with experience spanning consulting, early-stage startups, and post-acquisition environments. He began his career in project management before transitioning to product management and consulting at 10Pearls, a global software consultancy, where he worked with diverse clients ranging from early-stage startups to Fortune 500 companies across the healthcare, fintech, and entertainment sectors. This experience provided broad exposure to both B2B and B2C products, with a primary focus on B2B solutions.

His most significant role was as the founding product manager at Finmark, a Y Combinator startup focused on financial planning and analysis tools. He was the sole product manager throughout the company's growth from zero to thousands of startup and SMB customers, ultimately leading to the company's acquisition by Bill.com after approximately two and a half years. Currently, he serves as a senior product manager at Bill.com, where he continues to manage the Finmark product while integrating its features into Bill's unified platform.

## Work Experience
The candidate's recent work has been highly tactical and hands-on, particularly during his tenure at Finmark, where he built all existing features from the ground up. His approach demonstrates strong user-centric thinking and data-driven decision making.

A notable example was his comprehensive overhaul of Finmark's onboarding process, which was suffering from a 50% completion rate. He systematically analyzed the entire onboarding flow, categorizing elements into three buckets: absolute minimum requirements, nice-to-haves, and educational content that was inappropriately placed. His solution involved streamlining the core onboarding to essential elements with clear progress indicators, marking optional items as skippable and moving them to the end, and relocating educational content to contextual tooltips within specific product pages. Most significantly, he addressed the technical bottleneck of synchronous data integration, which could take hours, by implementing an asynchronous background process with prioritized data pulling (totals first, then details) and clear user communication about sync status (a sketch of this pattern appears at the end of this section). This resulted in a 23% increase in onboarding completion rates and reduced time-to-value.

His most technically sophisticated project involved building an AI chatbot for financial data interaction using GPT-3.5. The system allowed users to ask natural language questions about their financial data that weren't addressable through standard dashboard charts, such as "What are my top five vendors this past quarter?" The implementation involved complex data anonymization, local LLM integration for query processing, and a feedback system to address accuracy and hallucination issues. He designed the UX as a persistent chat interface that could overlay or expand alongside the main product interface, enabling users to interact with both the chatbot and the product simultaneously.

## Candidate Expectations
### Working Style
The candidate emphasizes collaborative decision-making and transparency in product development processes. He prefers involving engineering managers and cross-functional teams (sales, marketing, customer support) in roadmap prioritization rather than working in isolation. He values having 2-4 weeks of fully groomed work planned ahead while maintaining quarterly planning cycles, balancing structure with agility for necessary pivots.
### Prioritization Philosophy
He follows a three-pillar framework for prioritization: product strategy alignment with North Star metrics, feedback management (distinguishing signals from noise), and development appetite assessment. He's skeptical of rigid prioritization frameworks, preferring product intuition combined with collaborative input. His approach involves asking critical questions about time investment versus impact, such as whether a six-week development effort is truly worth that investment or whether a two-week version could meet customer needs.

### Career Goals
The candidate is interested in scaling challenges and process standardization, viewing these as positive indicators of company growth. He expects to establish foundational processes and team collaboration frameworks within his first 90 days, while building comprehensive product understanding and roadmap planning capabilities.

### Feedback and Customer Interaction
He emphasizes the importance of systematic feedback management, including both synchronous customer communication and asynchronous feedback collection systems. He plans to work closely with early adopters and vocal customers while establishing structured internal feedback processes.

## Questions/Concerns
The candidate expressed strong interest in understanding the company's early journey, particularly what the founders accomplished during the first two years before language models became viable for their AI data analyst vision. He was curious about the technical evolution from the initial heuristic-based approaches to the current LLM-powered solution.

He showed interest in learning more about the team structure, current processes, and specific areas where standardization might be needed as the company scales. The candidate also wanted to understand the current customer feedback mechanisms and early adopter engagement strategies.

While he didn't raise specific concerns during the interview, he indicated having additional questions that he had already discussed with other team members (Anshu and Ryan) and expressed willingness to continue the conversation in a follow-up session to ensure all his questions were addressed.
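The "totals first, then details" asynchronous sync described under Work Experience is a common pattern worth illustrating. The sketch below is an illustration of the idea under stated assumptions, not Finmark's code; `fetch_totals`, `fetch_details`, and the status store are invented stand-ins.

```python
import asyncio

status: dict[str, str] = {"sync": "queued"}  # surfaced to the user in the UI

async def fetch_totals(account: str) -> dict:
    await asyncio.sleep(0.1)  # stand-in for a fast aggregate API call
    return {"account": account, "total": 12_345.67}

async def fetch_details(account: str) -> list[dict]:
    await asyncio.sleep(1.0)  # stand-in for the slow line-item backfill
    return [{"txn": i, "amount": 10.0} for i in range(3)]

async def background_sync(account: str) -> None:
    status["sync"] = "totals_loading"
    totals = await fetch_totals(account)    # cheap call first: the user sees
    status["sync"] = "totals_ready"         # meaningful numbers immediately
    details = await fetch_details(account)  # expensive backfill continues after
    status["sync"] = "complete"

async def main() -> None:
    task = asyncio.create_task(background_sync("acct-1"))
    while not task.done():                  # onboarding proceeds; the UI polls status
        print("status:", status["sync"])
        await asyncio.sleep(0.3)
    print("status:", status["sync"])

asyncio.run(main())
```

The design choice is the point: onboarding never blocks on the slow backfill, and the visible status field gives the "clear user communication about sync status" the summary describes.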
Product Management Insights
## Summary
### Team Introductions and Company Overview
The meeting began with introductions from the Constellation Navigator team members. **Tara Goring** serves as senior manager of customer success and operations, bringing over 20 years of commodity industry experience and 10 years with Constellation. She oversees multiple product lines including UBM, carbon accounting, and advisory rebates. **Maurice Goodman** is the product owner for the carbon economy platform, having joined Constellation a year ago with an extensive background in fintech, blockchain, and AI applications across various industries, from CME Group to startups. **Ryan** manages the UBM platform and has been with the company for two and a half years, originally helping establish the bill pay process and formalizing operational procedures.

### Candidate Background and Experience
**Faisal** presented his career progression from consulting to startup leadership to Fortune 500 product management. He began in project management consulting, working with clients of varying sizes across the healthcare, fintech, and entertainment sectors. His pivotal role came as founding product manager at Finmark, a Y Combinator startup focused on financial planning and analysis (FP&A) tools for startups and SMBs. During his tenure, he scaled the product from zero to thousands of customers over two and a half years, ultimately leading to acquisition by Bill.com, where he currently serves as senior product manager.

### Product Development Philosophy and MVP Strategy
The discussion revealed Faisal's approach to building products under extreme constraints. At Finmark, the team had an ambitious 60-day timeline to develop an MVP for Y Combinator demo day, requiring ruthless prioritization. The initial MVP focused on three core components: **expenses with basic forecasting capabilities**, **payroll as the largest expense category**, and **revenue tracking**. The product included only two key metrics initially: runway calculation for cash management and total expenses broken down by department. This disciplined approach to feature selection demonstrated the importance of distilling user needs to their absolute essence while ensuring the technical foundation wouldn't require immediate rebuilding.

### Scaling Challenges and Growth Phase Management
As the product matured from startup to enterprise-level solution, Faisal outlined the transition from the zero-to-one phase to growth phase management. The key insight involved maintaining startup-level ruthless prioritization while accommodating enterprise requirements like compliance, security, and accessibility. He advocates for **vertical slice development** rather than horizontal feature building, allowing for deployable code chunks that can be tested and potentially released incrementally. A critical strategy involves allocating 20-30% of team capacity to technical debt, security compliance, and unexpected requirements that emerge as the customer base matures.

### Team Structure and Release Management
The engineering team structure evolved significantly post-acquisition, growing from Faisal as sole PM managing 15 engineers to a 4-PM team overseeing 32-33 engineers across multiple time zones. The team spans from India and Pakistan to the UK for design, with US-based engineers across all time zones, creating a 12-13 hour operational window.
The team maintains startup-level release velocity using **Kanban methodology** rather than traditional sprints, averaging 18-20 production releases monthly, with their highest month reaching 35 releases. Features may be deployed behind feature flags before user exposure, allowing for continuous integration while maintaining control over the user experience (a minimal flag sketch appears at the end of this section).

### Data Integration and Technical Architecture
A significant portion of the discussion focused on data management strategies as the product scaled. Initially, Finmark relied on third-party platforms like Plaid for data integration, with Faisal personally managing data transformation and system requirements alongside the technical architect. Post-acquisition, Bill.com recognized the strategic value of data ownership and created a dedicated "Bypass team" to migrate from external dependencies to internally-built integration tools. This transition addressed both cost concerns and the need for customized data processing that third-party solutions couldn't provide.

### Customer Feedback Management and Prioritization
The team explored how to balance customer requests with product roadmap priorities. Faisal emphasized the PM's critical role in **distinguishing signals from noise** in customer feedback, ensuring that individual customer requests align with broader user needs rather than building one-off solutions. The approach involves evaluating whether specific issues affect multiple customers and how solutions can be built modularly to serve various customer segments. Bug prioritization follows a structured P0-P4 system, with P0 issues requiring immediate attention and P1s addressed within 1-2 days, while lower-priority items have defined resolution timeframes.

### Quality Assurance and Release Process
The quality assurance process involves four distinct validation layers: **developer testing in local environments**, **peer code review**, **dedicated QA testing in staging environments**, and **product manager acceptance testing**. This multi-layered approach supports the high-frequency release schedule while maintaining product quality. The original engineer remains responsible for the final merge to pre-production and subsequent production deployment, ensuring accountability throughout the development lifecycle.

### Communication and Transparency Practices
Effective communication with customer success teams emerged as a crucial element for supporting rapid release cycles. Faisal maintains both synchronous and asynchronous communication channels, including regular meetings with CS leadership and accessible documentation in Jira. The roadmap remains transparent with clear Gantt charts and epic-level visibility, while individual stories include potential release note content to facilitate customer communication. This transparency becomes essential when releasing 18-20 updates monthly, ensuring customer-facing teams can effectively support users through product changes.
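As context for the feature-flag point above: the sketch below is a minimal, generic flag gate, not Bill.com's system. Code ships to production continuously, while user exposure is controlled separately; the flag name and rollout rule are hypothetical.

```python
import hashlib

# Flag registry: a real system would load this from a config service.
FLAGS = {"new_forecast_chart": {"enabled": True, "rollout_pct": 20}}

def is_enabled(flag: str, user_id: str) -> bool:
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Deterministic bucketing: the same user always lands in the same cohort,
    # so a 20% rollout exposes a stable 20% of users rather than random requests.
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_pct"]

for uid in ("user-1", "user-2", "user-3"):
    print(uid, is_enabled("new_forecast_chart", uid))
```

A gate like this is what lets a Kanban team merge and deploy 18-20 times a month while deciding separately, per flag, when each feature becomes visible.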
Finmark Product Evolution
## Background
The candidate has a strong product management background with experience at both startups and consulting firms. They previously worked as a consultant at 10Pearls before joining Finmark as the founding product manager. At Finmark, a financial planning and analysis (FP&A) startup, they were the first team member hired and worked closely with the CEO/co-founder Rami, a proven entrepreneur with two prior successful exits. The candidate played a crucial role in building Finmark from concept to a seven-figure ARR company with thousands of customers before its acquisition by Bill.com. Their experience spans the entire product lifecycle, from 0-to-1 development through scaling and eventual acquisition.

## Work Experience
The candidate's recent experience at Finmark demonstrates deep hands-on product management expertise in a high-growth startup environment. They joined when there was only an idea and no product, working under an ambitious 60-day MVP timeline for a complex financial product. They were responsible for customer research, working directly with Y Combinator batch companies as early adopters and conducting manual onboarding sessions to gather feedback.

Their work involved close collaboration with engineering teams that scaled rapidly from 0 to 15 engineers in three months, eventually reaching 20 engineers. They managed the product's evolution from serving pre-revenue startups to targeting SMBs, requiring significant product maturation and advanced feature development. A key achievement was leading a major technical refactoring initiative to introduce a referenceable database architecture, custom formulas, and custom variables, essentially recreating Excel's flexibility in a more collaborative, error-resistant format (a toy sketch of this idea appears below).

The candidate demonstrated strong stakeholder management skills, successfully selling the need for technical debt reduction and foundational improvements to non-technical leadership. They emphasized the importance of "slowing down to speed up" and came with solutions rather than just problems. Their approach involved gathering customer feedback to support technical decisions and working cross-functionally with engineering and design teams to build consensus before presenting to leadership.

## Candidate Expectations
### Working Style
The candidate values direct customer interaction and believes in understanding problems firsthand. They appreciate environments where product managers can engage directly with users and gather feedback through multiple channels, including demos, onboarding sessions, and support interactions. They seem comfortable with rapid scaling environments and cross-functional collaboration.

### Career Goals
The candidate is interested in roles where they can have significant impact on product direction and customer experience. They're drawn to the opportunity to work on the "Experiences" team at Air Garage, focusing on core product UX and platform usability. They appear motivated by the challenge of improving user experiences for diverse demographics, including non-tech-savvy users.

### Compensation
No specific compensation expectations were discussed during this interview.

### Specific Needs
The candidate expressed interest in understanding team structure and growth plans, asking specifically about the product team's evolution and the three-pod engineering structure. They want to understand how customer feedback is gathered and seem to value organizations that prioritize user research and direct customer interaction.
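Returning to the refactoring initiative described under Work Experience: "custom variables + custom formulas" with Excel-like references can be pictured as a small dependency-resolving evaluator. The toy sketch below is entirely my own illustration, not Finmark's design; the variable names, the substring-based dependency detection, and the guarded `eval` are all simplifications.

```python
# Each variable is either a literal or a formula string referencing other variables.
variables = {
    "headcount": 12,
    "avg_salary": 95_000,
    "payroll": "headcount * avg_salary",     # formula referencing other variables
    "other_expenses": 40_000,
    "burn": "payroll + other_expenses",
    "cash": 1_500_000,
    "runway_months": "cash / (burn / 12)",
}

def resolve(name: str, seen: set[str] | None = None) -> float:
    seen = seen or set()
    if name in seen:                          # error-resistant: no circular references
        raise ValueError(f"circular reference via {name!r}")
    value = variables[name]
    if isinstance(value, str):                # formula: resolve referenced names first
        # Toy dependency detection by substring; a real engine would parse the formula.
        env = {dep: resolve(dep, seen | {name}) for dep in variables if dep in value}
        return eval(value, {"__builtins__": {}}, env)
    return float(value)

print(f"runway: {resolve('runway_months'):.1f} months")  # 15.3 months for these inputs
```

The collaborative, error-resistant angle the summary mentions maps to exactly this kind of structure: named, shared variables instead of opaque cell references, and explicit cycle detection instead of silent spreadsheet errors.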
## Questions/Concerns
The candidate asked several thoughtful questions about Air Garage's operations and structure:
- **Team Structure**: Inquired about the product team's future growth plans and the transition to three engineering pods (Experiences, Capabilities, and Hardware teams).
- **Geographic Distribution**: Asked whether the engineering team is onshore, showing interest in team composition and collaboration dynamics.
- **Customer Feedback Mechanisms**: Wanted to understand both formal and informal ways the company gathers feedback from end users (drivers) and property owners.
- **Office Setup**: Asked about remote work policies and whether there's a physical office presence.
- **Customer Clusters**: Inquired about the geographic distribution of customers and usage patterns across different US regions.
- **Customer Interaction**: Wanted to understand expectations around PM-customer interaction and whether product managers engage directly with existing and potential customers.

The candidate showed particular interest in the customer feedback processes and seemed pleased to learn about the interviewer's goal of visiting 100 properties out of 300 total, indicating alignment with hands-on customer research approaches.
Product Team & Candidate Overview
## Background
The candidate has built a diverse career spanning software consulting, startup environments, and enterprise settings. Starting as a product manager at 10Pearls, a global software consultancy, they gained exposure to clients ranging from early-stage startups to Fortune 500 companies across the healthcare, fintech, and entertainment sectors. This consulting foundation led to a pivotal role as the founding product manager at Finmark, a Y Combinator startup, where they served as the sole product person from inception through acquisition by Bill.com over two and a half years. Currently working as a senior product manager at Bill.com, they continue managing the Finmark product while integrating features into Bill's broader offerings and contributing to various projects, including AI chatbot development.

## Work Experience
The candidate's recent experience demonstrates hands-on leadership in high-growth environments. At Finmark, they worked with a team of 15 engineers (two-thirds offshore, one-third onshore) to build the product from zero to thousands of startup and SMB customers, ultimately leading to acquisition. Their current role at Bill.com involves three key responsibilities: maintaining the live Finmark product with no sunset plans, integrating Finmark features into Bill's product suite (requiring significant integration work due to data housing complexities), and developing an AI chatbot foundation that evolved into Bill's primary customer engagement tool.

Their approach to product management emphasizes three core principles: product strategy alignment with North Star metrics, feedback discernment to distinguish signals from noise while building for 70-80% of users rather than individual customers, and appetite assessment to determine whether development estimates align with actual business value. They work closely with cross-functional teams including engineers, designers, customer success, sales, and marketing, while also engaging directly with customers for feedback and managing relationships with product directors, engineering managers, and technical architects.

## Candidate Expectations
### **Working Style**
The candidate prefers a collaborative approach with direct customer engagement and values product intuition over rigid frameworks. They advocate for vertical slice delivery over horizontal development, emphasizing the importance of showing value incrementally while working toward end goals. They appreciate environments where they can influence prioritization decisions and work closely with engineering teams to find optimal solutions.

### **Career Goals**
The candidate is seeking a role that offers growth opportunities in a dynamic, high-volume environment. They're attracted to the energy and collaborative nature of the organization, particularly the opportunity to work on scaling challenges and process optimization. They value being part of a team that's moving from hundreds to thousands of customers and appreciate the balance between startup agility and enterprise resources.

### **Timeline and Decision Process**
The candidate is actively interviewing and hopes to make a decision by the end of the month or early next month. While they don't currently have offers in hand, they're committed to maintaining transparency throughout the process and not leaving the hiring team without proper communication.
## Questions/Concerns
The candidate sought clarification on several organizational aspects, including team structure (confirming 35 total team members with 18 developers, mostly outsourced to Romania), reporting relationships (confirming they would report directly to the hiring manager), and key collaboration partners (engineering lead, architect, head of customer success, and operations team). They inquired about Chris Busby's role transition to the energy agreement marketplace division and expressed interest in understanding the biggest challenges and growth opportunities for the next 12 months, particularly around volume management and process optimization. The candidate also wanted to ensure proper scheduling for the next interview round with additional team members, including other product owners and the head of customer success.
UBM Product Manager - Interview
## Background
The candidate has built a solid product management career spanning 7-8 years across diverse environments. They began at 10Pearls, a global software consultancy, where they worked as a product manager serving 10+ clients ranging from early-stage startups to Fortune 500 companies across the healthcare, fintech, and entertainment sectors. This consultancy experience provided broad exposure to different projects and business models.

Following their consultancy role, they transitioned to become the founding product manager at Finmark, a Y Combinator startup focused on financial planning and analysis tools for startups and SMBs. As the first and sole product manager, they built the entire product from the ground up and helped guide the company from zero to acquisition by Bill.com over approximately 2.5 years. Post-acquisition, they've continued with the combined organization, bringing their total tenure to nearly 6 years, expanding their role and helping scale the product team.

## Work Experience
During their tenure at 10Pearls, the candidate gained valuable cross-industry experience working with clients of varying sizes and maturity levels. This consultancy background gave them adaptability and exposure to different business challenges, though they noted the inherent limitations of services work in terms of career growth and the constant context switching between projects.

At Finmark, they operated as a true founding PM, taking complete ownership of product development while working with a team of approximately 15 engineers across front-end, back-end, and QA, plus one architect and one designer. They collaborated closely with sales, marketing, and customer success teams. The role required significant cross-functional leadership and the ability to wear multiple hats in a startup environment.

Post-acquisition at Bill.com, their responsibilities evolved into three main areas: continuing as primary PM for the Finmark product, integrating Finmark features into Bill's unified platform, and working on special projects, including an AI chatbot for financial data interaction. They've also been instrumental in scaling the PM team from one person to four, requiring them to establish processes and frameworks that could support a larger organization. However, they've encountered challenges with organizational politics and resource allocation that have limited their growth opportunities in recent months.

## Candidate Expectations
### **Working Style**
The candidate strongly prefers working in small, close-knit teams where they can have direct communication without navigating multiple organizational layers. They miss the startup environment's agility and the ability to know exactly who to contact for any given issue. They're seeking an environment with minimal office politics, noting that when political navigation becomes 40-50% of their job responsibilities, it significantly impacts their job satisfaction.

### **Career Goals**
They're specifically targeting startups that have achieved product-market fit and are facing growth-related challenges. The candidate wants to tackle the "good problems" that come with scaling a successful product rather than struggling with basic product-market fit issues. They're looking for opportunities that will reignite their learning and professional development, which has stagnated in their current role over the past 6-8 months.

### **Role Preferences**
The candidate thrives on ambiguous, open-ended problems that require creative problem-solving and cross-functional collaboration.
They particularly enjoy founding PM-level responsibilities where they can take full ownership of problems from initial discovery through implementation. They value environments where they can work proactively rather than reactively, developing strategic roadmaps rather than constantly responding to immediate needs. ### **Compensation** While not explicitly discussed in detail, the candidate appears to be in a position where they can be selective about opportunities, suggesting they're not under immediate financial pressure to make a move. ## Questions/Concerns The candidate raised several thoughtful questions about Air Garage's business model and growth strategy, specifically asking about the balance between sales-led and product-led growth approaches. They wanted to understand the company's biggest challenges and how product management could contribute to customer acquisition efforts beyond the current sales-heavy approach. They inquired about success metrics for the PM role over the next 6-12 months and sought clarity on team structure, particularly the engineering team size and how product management would be organized. The candidate also asked about competitive landscape and how Air Garage differentiates itself from traditional parking operators. There was interest in understanding how product could contribute to customer acquisition and retention, with the candidate offering strategic insights about driver network effects, predictive sales tools, and portfolio expansion opportunities for existing customers. They demonstrated genuine curiosity about the technical and strategic challenges Air Garage faces as it scales from its current product-market fit position toward rapid growth.
Reviewing Development Work
## Background The candidate has extensive experience in product management with a particular focus on transitioning companies from sales-led to product-led growth models. They worked at Finmark, where they successfully implemented self-serve onboarding capabilities and managed the transition from traditional sales processes to product-led growth over a 12-14 month period. Their background demonstrates hands-on experience with building demo environments, managing product adoption challenges, and working with smaller to mid-market organizations. The candidate shows strong analytical thinking and has experience with data-related products, having previously worked on rudimentary products similar to the company's AI-powered business intelligence platform. ## Work Experience The candidate's recent experience at Finmark involved leading a critical business transformation from a sales-led motion to product-led growth. They implemented both semi-self-serve and full self-serve onboarding processes, recognizing the importance of reducing friction for potential customers. During this transition, they developed demo environments that allowed users to explore the product without requiring sales calls, understanding that customers need to experience the product before committing. They managed the associated costs and infrastructure challenges of providing demo access while balancing business needs. Their work involved addressing the fundamental challenge of product discoverability: ensuring potential customers could understand and evaluate the product without high-friction sales processes. The candidate demonstrated strategic thinking by implementing progressive onboarding approaches and showed practical understanding of the operational complexities involved in such transitions, including cost management for demo environments and user experience optimization. ## Candidate Expectations ### **Working Style** The candidate expects to work in a collaborative environment where they can contribute strong product opinions and strategic thinking. They want to be part of strategic decision-making rather than simply executing predetermined plans, showing interest in working alongside leadership to build product roadmaps. ### **Role Scope and Autonomy** They expect significant ownership over product strategy and roadmap development, wanting to collaborate with senior leadership as equals in strategic planning. The candidate is looking for opportunities to bring organizational maturity to product management processes and help establish better discipline around feature development cycles. ### **Career Growth** The candidate is interested in a role that offers potential for long-term leadership development, specifically the opportunity to build and lead a product team as the organization grows. They want to be positioned as a senior leader within the product organization. ### **Product Challenges** They expect to work on meaningful product challenges, particularly around user experience optimization, onboarding improvements, and the transition to self-serve models. The candidate shows interest in solving complex problems around product adoption and user engagement. ## Questions/Concerns The candidate raised several strategic concerns about the company's current approach. They questioned the lack of product accessibility, noting that potential customers cannot easily explore or demo the product without scheduling sales calls, which creates unnecessary friction.
They identified this as a significant barrier to adoption and growth. They expressed concerns about the conversational interface limitations, recognizing that while chat interfaces are improvements over previous BI tools, they're not the ultimate solution and present challenges like the "tyranny of the empty page" where users don't know what questions to ask. The candidate also inquired about data integration challenges, specifically questioning how the platform handles diverse data sources beyond SQL databases, including tools like Mixpanel and other analytics platforms that provide crucial context for comprehensive business intelligence. They asked about the biggest opportunities and challenges for the next 6-12 months, wanting to understand the strategic priorities and potential obstacles they would need to address in the role. The candidate also sought clarity on success metrics and expectations for a PM role over the next year, demonstrating their focus on measurable outcomes and clear performance expectations.
Hardware and Phone Discussion
## Summary ### Unclear Meeting Content A short audio clip of the meeting was recorded, but the transcribed content is garbled or distorted, making it difficult to extract useful information or identify specific topics. - **Recording quality**: Technical problems with the recording or transcription appear to have distorted the audio content - **Duration**: The recorded meeting was very short (only about one minute) - **Unclear content**: The transcript contains jumbled, disconnected words that make the context impossible to follow ### Recommendations for the Future Given how unclear this meeting's content is, it is recommended to review recording settings and confirm audio quality before future meetings. - **Improve recording quality**: Ensure the audio is clear and free of interference or noise - **Check equipment**: Inspect the recording devices and microphones in use - **Pre-meeting test**: Run a short test before each meeting starts to confirm recording quality
AI Chatbot Product Management
## Background The candidate is currently a Senior Product Manager at Bill.com, having joined through the acquisition of Finmark where he served as the founding product manager. He was the very first team member to join after the co-founders at Finmark, coming through 10Pearls as a contractor initially. At Finmark, he built the product from zero to acquisition over two and a half years, serving as the sole PM throughout this period. Finmark was a financial planning and analysis (FP&A) tool for startups and SMBs, and under his leadership, the company grew from zero to thousands of customers before being acquired by Bill.com. The team scaled from just the founders to 32-33 employees at acquisition, with engineering scaling to 15 people within the first two months. ## Work Experience The candidate's recent work experience demonstrates strong hands-on product management across multiple domains. At Finmark, he was responsible for building almost all product features himself, working closely with a rapidly scaling engineering team that grew from 3 co-founders to 15 engineers in two months. He managed the entire product development lifecycle, from competitive research and customer outreach to feature development and customer acquisition. Post-acquisition at Bill.com, his responsibilities expanded to include maintaining the original Finmark product, integrating core Finmark features into Bill's unified platform, and leading the development of an AI chatbot for financial data engagement. The AI chatbot project began as a hackathon initiative at Finmark and evolved into a foundational platform at Bill.com that works across all of Bill's product offerings. He worked with a team of four other PMs, all reporting to a product director, and collaborated extensively with data teams and technical architects. His work involved complex technical challenges including data anonymization, local LLM implementation for query processing, and integration with OpenAI's API while managing cost constraints and privacy concerns. ## Candidate Expectations ### Working Style The candidate strongly prefers smaller team environments, having experienced the contrast between Finmark's 32-person team and Bill.com's 3,000+ employee organization. He values direct communication channels and the ability to move quickly without bureaucratic overhead. He's comfortable with the "full stack PM" role that comes with startup environments, including product management, project management, product marketing, customer interaction, and collateral creation. ### Career Goals He's seeking a role where he can be at the forefront of technological change, particularly in the AI and data analytics space. The candidate wants to work on products that tackle problems many companies are facing and is looking for opportunities with significant growth potential. He's specifically interested in companies that have already achieved product-market fit and are focused on scaling and growth rather than still searching for initial traction. ### Compensation and Timeline While specific compensation details weren't discussed, the candidate indicated he started actively looking about a month ago and is ready to move quickly if the right opportunity presents itself. He's being selective with opportunities and is only pursuing roles he finds genuinely interesting. ## Questions/Concerns The candidate asked about the biggest challenges the company faces over the next 12 months, showing interest in understanding the strategic obstacles ahead.
He inquired about what success would look like for the role in 6-12 months, demonstrating his focus on measurable outcomes and clear expectations. He also showed technical curiosity about the company's data integration capabilities, asking about how data is currently pulled (learning it's SQL-only) and expressing understanding of why customers would want broader data source integration. The candidate asked about the size of the engineering team (8 engineers) and seemed comfortable with that scale based on his previous experience. His questions revealed genuine interest in the role, with him stating that this opportunity is "definitely top of my list" among the selective opportunities he's pursuing. He expressed enthusiasm about the technical challenges, particularly around solving UX problems in the AI analytics space where there are no established patterns to follow.
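One technical thread in this summary lends itself to a concrete illustration: the candidate's chatbot work combined local processing (data anonymization, query handling via a local LLM) with calls to OpenAI's API to balance privacy against cost. The sketch below shows only the general anonymize-before-query pattern; it is not Bill's implementation, and the function names, alias map, regex, and model choice are all hypothetical. It assumes the official `openai` Python client (v1+) with `OPENAI_API_KEY` set in the environment.

```python
import re

from openai import OpenAI  # assumes the official `openai` Python package (v1+)

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def anonymize(question: str, aliases: dict[str, str]) -> str:
    """Swap known customer identifiers for neutral aliases and mask raw
    dollar amounts so sensitive figures never leave the local machine."""
    for real, alias in aliases.items():
        question = question.replace(real, alias)
    return re.sub(r"\$\d[\d,.]*", "<AMOUNT>", question)


def deanonymize(answer: str, aliases: dict[str, str]) -> str:
    """Restore the original identifiers in the model's reply."""
    for real, alias in aliases.items():
        answer = answer.replace(alias, real)
    return answer


aliases = {"Acme Corp": "Customer_A"}  # hypothetical mapping, kept local only
safe_prompt = anonymize("Why did revenue from Acme Corp drop by $12,400?", aliases)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any hosted chat model; cost was a stated constraint
    messages=[{"role": "user", "content": safe_prompt}],
)
print(deanonymize(resp.choices[0].message.content, aliases))
```

Because the alias map never leaves the customer's side, the hosted model only ever sees placeholder tokens; per the summary, a local LLM could stand in for the simple regex step to detect sensitive entities more robustly.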
Development Projects Review
## Summary ### **R Homes' Vision and Achievements** R Homes was founded three years ago with a clear vision of building 100 residential units, focused on the cities of Riyadh and Medina. **The company has succeeded in reaching its target**, with total units standing at 99 across the two cities. - **Riyadh projects**: 10 projects (R1 through R10) totaling 52 residential units, ranging in size from 1 to 12 units per project - **Medina projects**: 5 projects (R11 through R15) totaling 47 residential units, ranging in size from 3 to 12 units per project ### **Funding Needs for Current Projects** The company needs **SAR 15 million** to complete all of its current projects, broken down as follows: - **Riyadh projects**: roughly SAR 7.5 million to complete construction and finishing - **Medina projects**: roughly SAR 8 million to complete all remaining phases - **Immediate priority**: SAR 1-2 million needed within the next six months to keep work going ### **Atwad Hotel Project** The Sadeem Hotel project is one of the company's largest strategic projects, comprising **120 hotel units** at an estimated total cost of roughly SAR 32 million. - **Excavation and foundations**: SAR 100,000 for excavation and SAR 6.5 million for the concrete structure - **Supervision and miscellaneous**: SAR 50,000 for supervision and SAR 350,000 for miscellaneous expenses - **Final finishing**: estimated at roughly SAR 25 million to complete all finishing work and hotel fit-out (these line items sum to the stated SAR 32 million total) ### **R Suites Residential Project** The project has obtained its **building permit**, and excavation and foundation work has begun at a cost of SAR 1.5 million. It is considered a promising project that will add significant value to the company's real estate portfolio. ### **Leen Production Factory** The company has invested **SAR 7 million** in establishing the Leen factory, with ongoing financial commitments that include: - **Payments to Semco**: SAR 1.6 million over 12 months as regular monthly installments - **Operating expenses**: SAR 1 million for the first year, with revenue expected to cover operating costs thereafter ### **Madini's Varied Projects** The company is developing several projects under the Madini umbrella, including: - **Mini Med project**: a small medical complex intended to serve the local community - **Commercial complex**: retail shops and restaurants serving the area's needs - **Hotel project**: a mid-size hotel serving the area's hospitality sector ### **Financial and Investment Strategy** - **Cash flow management**: prioritize the existing projects (R Homes Riyadh and Sadeem Hotel Medina) while managing liquidity to keep operations running - **External financing**: essential, particularly given the company's immediate need for **SAR 1-2 million** over the coming months to keep its projects moving.
Daily Progress Meeting
## Summary The discussion focused on reviewing progress and providing feedback on a project, particularly addressing content structure and engagement. Key points included acknowledging successful edits based on previous feedback, identifying areas for refinement in specific chapters, and emphasizing the need to enhance the introduction for better viewer retention. The overall tone was constructive, with a focus on streamlining content without losing essential narrative elements. ## Wins - **Successful implementation of feedback** from previous reviews, resulting in improved content quality. - **Effective storytelling** in certain sections, which effectively painted a vivid picture of events. ## Issues - **Excessive detail** in chapters 3 and 4, risking viewer disengagement due to length. - **Introduction lacks "punch"**—while functional, it needs stronger hooks to maintain attention in a long-form video. ## Commitments - **Review and reduce content** in chapters 3 and 4 to eliminate non-essential details. - **Refine the introduction** to create a more compelling opening. - **Address specific comments** left in the document for further polishing.
Salam Cola Brand Overview
## Brand Overview Salam Cola is a purpose-driven soft drink company founded to support children in conflict zones and impoverished regions. Each flavor represents a specific country and channels profits into targeted humanitarian efforts. The brand has achieved rapid international expansion, reaching 18 countries within 16 months, driven by strong consumer demand for its ethical business model. ### Core Mission and Social Impact The company prioritizes humanitarian aid over profit: - **Education initiatives**: Opened a school for 700 Palestinian children displaced from Gaza to Egypt. - **Rebuilding infrastructure**: Reconstructed a bombed mosque in Syria to restore community spaces. - **Emergency aid**: Conducted medical aid, cash distributions, and malnutrition relief in Yemen, Lebanon, and Syria. - **Orphan sponsorship**: Provides ongoing support for orphans in war-torn regions through partnerships with local organizations. ### Market Position and Growth Salam Cola’s rapid scaling highlights untapped demand for socially conscious products: - Operates as a kitchen-to-global enterprise, maintaining agility despite limited resources. - International success demonstrates consumer willingness to support brands with transparent charitable ties. - Current challenges include managing supply chain scalability to meet growing demand across 18 countries. ### Cultural Relevance The brand addresses contemporary societal needs: - Fills a moral void by creating tangible impact during global crises. - Appeals to socially aware demographics through cause-linked product lines (e.g., Syria-themed drinks funding reconstruction). - Humanizes corporate responsibility by directly connecting purchases to specific beneficiaries (e.g., "X flavor supports Y orphanage"). ## Future Implications Salam Cola’s model presents a blueprint for ethical entrepreneurship: - Proves commercial viability of businesses prioritizing social impact over shareholder profits. - Highlights market opportunities in combining cultural identity with charitable giving (e.g., region-specific flavors funding localized aid). - Exposes critical infrastructure gaps in conflict zones that private initiatives can address faster than governmental/NGO channels.
Teams Review
## Background The candidate is currently a Senior Product Manager at Bill, where they manage the Finmark application and integrated features following its acquisition. Their career highlights include cross-functional collaboration with engineering, design, and stakeholder teams to drive product roadmaps. They hold permanent residency in the U.S., enabling unrestricted work eligibility. ## Work Experience Over the past eight months, the candidate has faced resource constraints at Bill, limiting opportunities for meaningful product development or new challenges. Key responsibilities include: - Leading post-acquisition integration of Finmark’s features into Bill’s platform. - Managing end-to-end product lifecycle for Finmark and related projects, coordinating with front-end/back-end engineers, QA teams, architects, and designers. - Balancing strategic roadmap planning with hands-on execution, including stakeholder alignment (internal and external) to meet business goals. Their work style emphasizes adaptability in fast-paced environments, though they seek roles with clearer growth trajectories and resource support. ## Candidate Expectations ### Compensation Targets the higher end of the $65–$72 hourly range for the contract role. ### Working Style - Prefers hybrid work arrangements (e.g., twice weekly in-office requirements). - Values roles with defined challenges and opportunities to leverage cross-functional collaboration. ### Career Goals - Seeks a position with **growth potential**, specifically citing interest in Marriott’s expanding operations and internal mobility via the Flex program. - Prioritizes roles offering learning opportunities and strategic impact over stagnant projects. ### Logistics - Requires a 2–3 week notice period before starting. - Has a pre-planned vacation during the first two weeks of August. ## Questions/Concerns - Clarified the interview timeline (2–3 weeks total process) and next steps, including three rounds: hiring manager screening, panel interview, and meet-and-greet. - Confirmed flexibility for 48-hour notice for interview scheduling. - Expressed interest in understanding Marriott’s contract-to-hire pathway and conversion likelihood. - Noted active participation in other interviews but no pending offers.
Smith AI Product Role Discussion
## Background The candidate has a robust background in product management, particularly in startup environments. They were the founding product manager at Finmark, a startup later acquired by Bill, where they transitioned to a Senior Product Manager role. Over six years at Finmark/Bill, they gained experience in scaling products and navigating the shift from a lean startup to a larger corporate structure with 3,000+ employees. Their career reflects a focus on early-stage innovation and adaptability to organizational growth phases. ## Work Experience - **Finmark (Startup Phase):** Spearheaded product strategy as the founding PM, operating in a high-ambiguity, fast-paced environment. Emphasized rapid iteration, problem-solving autonomy, and direct ownership of outcomes. - **Post-Acquisition at Bill:** Transitioned to a structured corporate setting, managing increased layers of compliance, stakeholder alignment, and slower decision-making processes. Expressed a preference for the agility of startups, where "building, testing, and pivoting" were central to their workflow. - **Key Strengths:** Thrives in environments with clear goals but minimal micromanagement, excels at navigating ambiguity, and values the ability to execute quickly without excessive pre-approval hurdles. ## Candidate Expectations ### Career Goals - Seek roles involving **cutting-edge AI technologies** and opportunities to contribute to impactful, innovative solutions. - Prioritize **long-term growth potential**, particularly in early-stage startups where equity and ownership align with career advancement. ### Working Style - Prefers **autonomy** and a lean team structure with minimal bureaucratic overhead. - Values environments where experimentation and iterative problem-solving are encouraged. - Dislikes excessive compliance layers and slow decision-making processes common in larger organizations. ### Compensation - Open to **equity-heavy packages** in early-stage startups, balancing salary with long-term financial upside. - For established companies, expects competitive base salaries (hinted range: $140k–$170k+) but remains flexible based on equity and role alignment. ## Questions/Concerns - Clarified the **geographic distribution of the product team** (current PMs split between Spain and the U.S.). - Asked about the **reporting structure** and role of the Director of Product (Kathleen) in the interview process. - Expressed interest in **equity specifics** (e.g., stock price, strike price) and how compensation balances salary with equity in a bootstrapped company. - Sought transparency on **company culture** and how compensation fairness is maintained across global teams.
YouTube Channel Voice Actor
## Background The candidate has been a professional voice actor for four years, working across audiobooks, corporate promos, commercials, and YouTube content. With a focus on YouTube channels, they’ve contributed to thousands of videos, including viral successes, and have collaborated with clients in niches like sports, history, travel, and tech. Their background includes formal outdoor education through a high school program in Colorado, emphasizing backcountry skills, navigation, and safety—experience that aligns with the channel’s focus on outdoor disasters. --- ## Work Experience - **YouTube Specialization**: The candidate’s primary expertise lies in YouTube content production, having managed workflows via platforms like Trello. They emphasize consistency in tone, pacing, and delivery, adapting to channel-specific styles after initial revisions. - **Client Engagement**: Typically works with clients for 3–12 months, averaging six-month partnerships. Projects range from 1,500–2,000 words for 8–15 minute videos, though they’ve handled longer scripts (up to 5,000 words) with adjusted pricing. - **Production Process**: Contracts include a two-business-day turnaround (often delivering next-day) and use Upwork’s milestone system for payments. They avoid off-platform transactions to comply with Upwork’s terms. - **Technical Skills**: Proficient in studio recording, editing, and revisions, with a focus on minimizing client bottlenecks. --- ## Candidate Expectations ### Compensation - Prefers a tiered rate structure based on script length, with a baseline minimum to account for studio time and preparation. Open to adjusting pricing for consistent 3,000–4,000-word scripts but emphasizes that longer scripts require proportionally higher rates due to increased production time. ### Working Style - Requires scripts to be submitted via Upwork with milestones funded in escrow. Does not accommodate monthly invoicing or off-platform payments. - Relies on clear style guidelines upfront (e.g., tone, pacing) to ensure consistency, with adjustments made during initial revisions. ### Contract Terms - Willing to review a non-compete clause for outdoor disaster/survival niches but cannot commit without evaluating specifics. Assures no current overlapping clients in this niche. --- ## Questions/Concerns 1. **Non-Compete Scope**: Sought clarity on whether the non-compete would apply broadly (e.g., all outdoor or disaster content) or narrowly (outdoor disasters only). Requested to review the contract wording before committing. 2. **Payment Structure**: Expressed hesitation about simplifying rates to a flat per-word fee, citing the nonlinear relationship between script length and production effort. Proposed adjusting baseline minimums instead. 3. **Workflow Compatibility**: Highlighted inflexibility on payment timing (strict milestone-per-script model vs. monthly aggregation), emphasizing Upwork’s escrow system as non-negotiable.
Capital One Shopping PM Roles
## Background The candidate has a dynamic career spanning startups, acquisitions, and consulting. They began as a founding Product Manager at Finmark, a YC-backed startup, where they built the product from scratch and scaled the customer base to thousands of startups and SMBs over two and a half years. Post-acquisition by Bill.com, they transitioned to a Senior Product Manager role, overseeing Finmark’s integration into Bill’s platform and leading AI-driven projects, including a chatbot for financial data interaction. Prior to Finmark, they worked in consulting, managing product delivery across diverse clients and sectors, which provided a foundation for their transition into startup leadership. ## Work Experience - **Founding PM at Finmark**: Led 15 engineers as the sole product manager, driving the product from concept to acquisition. Focused on financial planning tools for startups, emphasizing scalability and user-centric design. - **Post-Acquisition Integration at Bill.com**: Owned the integration of Finmark’s features into Bill’s platform while maintaining the live product. Spearheaded an AI chatbot initiative that evolved from MVP to a dedicated team effort, unlocking new avenues for user interaction with financial data. - **Consulting Background**: Developed adaptability by managing cross-sector projects, which later informed their approach to building Finmark’s product strategy and execution. ## Candidate Expectations ### Compensation & Career Growth - Seeks roles with clear growth trajectories, including opportunities to scale teams and own strategic initiatives. Expressed interest in transitioning from individual contributor to leadership, with a focus on expanding responsibilities (e.g., managing engineers, analysts, or junior PMs). - Prefers flexibility in compensation structure and location leveling, with openness to discussions around equity, bonuses, and remote work arrangements. ### Working Style - Values autonomy and ownership, particularly in ambiguous, 0-to-1 environments. Emphasizes data-driven decision-making (SQL, APIs) and comfort with technical collaboration. - Interested in roles balancing internal tooling (e.g., CRM systems) and external-facing projects (e.g., advertiser integrations). ### Role Specifics - Leaning toward roles involving **scaling existing platforms** (e.g., automating affiliate network partnerships) or **building new solutions** (e.g., replicating tracking/reporting systems for direct advertiser relationships). - Prioritizes visibility with leadership and cross-functional collaboration, particularly in hybrid startup-corporate environments. ## Questions/Concerns - **Hiring Process**: Clarified the timeline (1–2 months), interview structure (case studies, “power day” with four interviews), and preparation expectations. - **Team Dynamics**: Asked about team size (lean structure: 3 PMs, 6 engineers), growth potential, and alignment with Capital One Shopping’s startup-like culture. - **Remote Work**: Confirmed fully remote flexibility but noted occasional travel requirements (e.g., client meetings, agency partnerships). - **Role Clarity**: Sought details on the distinction between two open positions (scaling vs. building) and their long-term impact on the business.
Teams Review
## Background The candidate has a strong background in product management, particularly in developing AI-driven solutions and financial tools. Their experience includes leading significant projects at Finmark, where they focused on integrating AI to enhance data engagement and customizable forecasting. A key highlight is their role in pioneering a chatbot prototype using OpenAI’s GPT-3.5 API, which later evolved into a core feature across product offerings. Their career reflects a blend of technical innovation and customer-centric product development, with a focus on solving complex financial and operational challenges through technology. ## Work Experience - **AI-Driven Product Development**: Spearheaded a hackathon project at Finmark to prototype a chatbot capable of generating customizable financial reports and answering data queries. This involved rapid iteration with engineering teams, addressing challenges like AI hallucination and safeguarding against misuse. The project laid the groundwork for a scalable, user-friendly data interaction tool. - **Revenue Forecasting Overhaul**: Identified limitations in existing forecasting methods and led a cross-functional initiative to build a spreadsheet-like interface for customizable revenue forecasting. Phased implementation included MVP testing, user feedback integration, and a 20–25% increase in customer acquisition post-launch. - **Operational Efficiency**: Integrated LLMs into operational workflows to reduce processing time, demonstrating a practical approach to leveraging AI for scalability. Collaborated closely with sales and engineering teams to align product features with market demands. ## Candidate Expectations ### **Working Style** - Prefers collaborative environments with direct access to customer feedback and cross-functional teams (sales, engineering, design). - Values iterative development, emphasizing MVP testing and phased rollouts to validate concepts. ### **Career Goals** - Aims to work on products that combine AI with operational or financial data to drive measurable impact (e.g., cost reduction, efficiency gains). - Interested in roles that balance strategic vision with hands-on problem-solving, particularly in scaling SaaS platforms. ### **Innovation and Customization** - Prioritizes building flexible, user-centric tools that replicate spreadsheet-like customization within applications. - Seeks opportunities to push AI integration beyond novelty, focusing on solving tangible business problems (e.g., reducing late fees via automated workflows). ## Questions/Concerns - **KPIs and Success Metrics**: Asked about the primary success metrics for the role, emphasizing interest in measurable outcomes like transaction accuracy and operational efficiency. - **Team Dynamics**: Inquired about team size, structure, and geographic distribution to assess collaboration workflows. - **Pre-Sales Involvement**: Explored the extent of product management’s role in pre-sales activities, indicating a preference for direct customer engagement to shape product roadmaps. - **Competitive Differentiation**: Showed interest in understanding how the company plans to differentiate its platform from competitors, particularly through AI and customer experience.
Video Editor Interview Recap
## Background The candidate began their video editing career during the COVID-19 lockdown in 2020, initially creating gaming montages and sharing them on social media. This hobby evolved into freelance work after clients reached out organically. Over the past four years, they’ve transitioned between freelance and corporate roles, recently leaving a full-time agency position to return to freelancing. Their work spans documentaries, ads, gaming content, and commercial projects, with a focus on storytelling as the foundation of their editing approach. ## Work Experience - **Diverse Clientele**: Edited videos for brands like Zerodha (financial markets content), Royal Challengers Bangalore (IPL-related content), and AVTV, often collaborating through agencies. - **Style Versatility**: Worked across multiple genres, including documentaries, product commercials, talking-head videos, and animated content. Adapts to varying client needs without being confined to a single niche. - **Freelance Focus**: Prioritizes understanding client vision through detailed scripts and creative direction, emphasizing pacing and narrative flow. Recently shifted back to freelancing after a year in a corporate role, citing flexibility and creative autonomy as key drivers. - **Technical Skills**: Proficient in motion graphics, animation, and audio design, though noted a learning curve in integrating AI-generated footage for serious-toned content. ## Candidate Expectations ### Compensation - Initially proposed a rate of **$8–10 per minute** but negotiated to **$6 per minute** during the discussion, acknowledging the project’s long-term potential and learning opportunities. ### Working Style - Values **clear creative direction** and iterative feedback, particularly in early collaboration stages. - Prefers structured workflows, such as editing shorter segments (e.g., 5-minute samples) for focused feedback before tackling full-length videos. ### Career Goals - Seeks to grow within a niche (e.g., disaster/survival storytelling) while maintaining flexibility to adapt to evolving industry trends. - Emphasizes continuous learning, particularly in balancing AI tools with traditional editing techniques for authentic storytelling. ## Questions/Concerns - **AI Integration**: Expressed hesitation about using AI-generated footage for serious narratives, noting its “artificial” feel compared to stock or animated content. - **Creative Alignment**: Raised the importance of understanding the project’s vision early on, especially for longer-form content (15–20 minutes), to ensure cohesive storytelling. - **Feedback Process**: Inquired about the frequency and depth of feedback cycles, particularly during onboarding, to align expectations for revisions and improvements. - **Non-Compete Agreement**: Clarified understanding of restrictions on working with competing clients in the outdoor disaster-story niche, emphasizing commitment to avoiding conflicts of interest.
Video Editor Interview & Trial
## Background The candidate has been a video editor since 2018, with experience spanning real estate, podcast ads, documentaries, and niche-specific content. They have maintained long-term collaborations with clients, including a UK politician’s documentary project. Their technical toolkit includes Adobe Premiere Pro, After Effects, and CapCut for animations and captions. Additionally, they possess SEO and digital marketing skills, leveraging tools like ChatGPT and Google Translate to bridge language gaps. ## Work Experience - **Diverse Portfolio**: Edited content across niches such as real estate, podcasts, and documentaries, emphasizing storytelling through dynamic visuals and sound design. - **Technical Proficiency**: Uses a multi-software workflow (Premiere Pro for editing, After Effects for animations, CapCut for titles) to create polished outputs. - **Collaborative Approach**: Works closely with clients to align edits with narrative tone, including adding SEO tags and optimizing titles for platforms like YouTube. - **Adaptability**: Demonstrated ability to animate static images, integrate B-roll footage, and customize edits based on feedback. A trial project (5-minute segment of a 20-minute video) was proposed to assess fit, with an emphasis on iterative feedback. ## Candidate Expectations ### Compensation - Initially proposed $12 per minute but negotiated to **$10 per minute** for the trial phase, open to renegotiation after 2–3 videos. - Prefers direct invoicing via Payoneer due to PayPal limitations in Bangladesh. ### Working Style - Relies on translation tools (Google Translate, ChatGPT) for communication but is committed to improving English proficiency. - Prefers structured feedback early on to align with project standards, anticipating reduced revisions over time. ### Career Goals - Aims to expand expertise in long-form content (15–25 minutes) focused on survival/disaster storytelling. - Interested in skill development, particularly in AI-generated footage integration (used sparingly for authenticity). ### Specific Needs - Clear scripts and voiceovers upfront to streamline editing. - Non-compete agreement acceptance for the niche (outdoor survival/disaster content) but flexibility for other genres. ## Questions/Concerns 1. **Language Barrier**: Limited spoken English fluency raised concerns about understanding nuanced feedback and collaborating with English-only team members long-term. 2. **Rate Sustainability**: The candidate questioned whether the proposed rate ($10/minute) would remain feasible for extended, high-effort projects (20+ minutes). 3. **Quality Assurance**: Emphasized that maintaining premium editing standards (e.g., sound design, pacing) justifies higher costs but expressed willingness to adapt to budget constraints. 4. **Contract Clarity**: Sought details on payment timelines, revision policies, and scope of the non-compete clause.
Scriptwriter Onboarding Meeting
## Background The candidate has a background in scriptwriting, with experience in creating content for platforms like YouTube. They are familiar with narrative structures, storytelling techniques, and adapting to different pacing styles (e.g., slow-burn vs. fast-paced content). Their work emphasizes tight, engaging writing and collaboration with teams, including feedback integration and iterative improvements. They have prior experience using tools like Trello for project management and Google Drive for collaborative workflows. ## Work Experience - **Script Development**: Focused on crafting detailed story outlines and refining scripts through iterative feedback. The candidate is adept at balancing creative input with client vision, ensuring alignment with project goals. - **Collaborative Process**: Works closely with editors and other writers, leveraging platforms like Slack and WhatsApp for communication. Emphasizes transparency and structured workflows, including shared folders and SOPs. - **Adaptability**: Skilled in adjusting to varying storytelling styles, particularly slower-paced narratives that prioritize depth and viewer retention. Has experience experimenting with content formats to optimize engagement. - **Technical Proficiency**: Uses tools like Square for invoicing and payment processing, streamlining administrative tasks. ## Candidate Expectations ### Compensation - Agreed rate of **$4 per 100 words**, with payments processed monthly via Square invoicing. A trial period for the first three scripts will confirm mutual fit before formalizing a long-term contract. ### Working Style - Prefers **creative freedom** within structured guidelines, with opportunities to propose story outlines and adjust based on feedback. - Values asynchronous communication (e.g., WhatsApp, email) for sharing drafts and updates, with occasional synchronous reviews for detailed feedback. - Expects access to resources like style guides, character profiles, and project management tools (e.g., Trello) to ensure consistency. ### Career Goals - Aims to collaborate on long-form, narrative-driven content that prioritizes depth over rapid pacing. Interested in refining storytelling techniques and contributing to a cohesive creative vision. ### Logistics - Open to signing an NDA post-trial period to protect intellectual property. - Anticipates a gradual reduction in oversight as workflows align, with eventual integration into a broader team structure. ## Questions/Concerns - Clarified expectations around the **trial period**, including feedback frequency and evaluation criteria for the first three scripts. - Asked about **payment logistics**, proposing Square invoicing for simplicity and transparency. - Expressed interest in understanding the **creative direction** (e.g., tone, pacing, target audience) to ensure alignment with the project’s vision. - Inquired about **team dynamics**, including collaboration with other writers and editors, and the potential for recurring meetings as the team expands.
Constellation Navigator Invoices Drop Folder
## Background The candidate has over three years of experience in product management, primarily in fintech and SaaS environments. They joined Bill through the acquisition of Finmark, a financial planning and analysis tool for startups and SMBs, where they served as the founding product manager. Their career highlights include managing product integrations post-acquisition, leading the development of a financial AI chatbot, and maintaining Finmark’s standalone product. ## Work Experience - **Founding Product Manager at Finmark**: Spearheaded the product from inception, focusing on financial planning tools for startups. Post-acquisition by Bill, continued managing the product while integrating key features into Bill’s platform. - **Product Manager at Bill**: Oversees legacy Finmark functionalities and new projects, including an AI-driven chatbot that enables interactive data analysis for users. - **Startup Mentality**: Thrives in agile, cross-functional environments, emphasizing solution-building, customer collaboration, and iterative development. - **Technical Collaboration**: Regularly works with engineering teams to challenge timelines and scope, ensuring efficient delivery without direct hands-on coding experience. ## Candidate Expectations ### Working Style - Prefers a hybrid model with **flexibility** post-onboarding: willing to commute to Baltimore once a week initially but seeks reduced frequency (e.g., every other week) after settling into the role. - Values autonomy in a startup-like environment, with opportunities to innovate and customize solutions for clients. ### Career Goals - Aims to leverage startup experience in a larger organization, focusing on scalable product development and sustainability-driven projects. - Interested in roles that blend strategic oversight with hands-on collaboration across teams. ### Logistics - Open to a **contract-to-hire** structure, with conversion expected around the six-month mark. - Geographically flexible but emphasizes minimizing commute time due to traffic challenges in the DC-Baltimore corridor. ## Questions/Concerns 1. **Team Structure**: Inquired about the size and composition of the Constellation Navigator team, including engineering and sales roles. 2. **Product Roadmap**: Asked for details on future functionality plans and customization capabilities for clients. 3. **Technical Requirements**: Expressed curiosity about the job description’s mention of Node.js expertise, though clarified that product management roles typically don’t require hands-on coding. 4. **Interview Process**: Sought clarity on the STAR method-based panel interview and how to best prepare for situational questions. 5. **Conversion Timeline**: Discussed flexibility around the six-month contract-to-hire window and alignment with long-term goals. The candidate emphasized adaptability to office attendance expectations but hinted at a preference for renegotiating frequency post-onboarding. They also highlighted a strong track record of “calling out BS” in engineering timelines, aligning with the hiring manager’s desire for assertive project oversight.
Daily Progress Meeting
## Summary The discussion focused on refining the scriptwriting process for content creation, addressing challenges faced in the first script, and planning future workflows. Key points included progress on finalizing hires for scriptwriting and editing roles, the need to balance creative pacing (slow-burn vs. fast-paced narratives), and strategies to reduce turnaround time for subsequent scripts. The conversation also highlighted the importance of aligning feedback between collaborators to avoid conflicting directions and creative fatigue. A new story concept involving moral ambiguity in a desert survival scenario was introduced, with emphasis on research, structure, and narrative depth. ## Wins - **Script Progress**: The first script has seen significant improvement, moving from initial drafts to an 80-90% finalized state. - **Team Expansion**: Two to three potential scriptwriters and editors have been shortlisted, streamlining future hiring. - **AI Integration**: Initial experiments with ChatGPT to refine conversational tone in scripts showed promise, with plans to develop a custom GPT for consistency. - **Payment Clarity**: Confirmation that compensation for the first script will be processed by the end of the current month. ## Issues - **Directional Conflicts**: Unclear creative direction and conflicting feedback between collaborators led to delays and revisions in the first script. - **Process Bottlenecks**: Overlapping edits and prolonged focus on a single script caused creative fatigue, slowing overall progress. - **Technical Hurdles**: Connectivity issues during the meeting disrupted screen-sharing and real-time collaboration. - **Moral Narrative Challenges**: The new story’s complex themes (e.g., mercy killing) require careful handling to balance factual accuracy and ethical discussion. ## Commitments - **Script Finalization**: The first script will undergo final tweaks to align tone and pacing, with a focus on minimizing future revisions. - **Next Story Outline**: A detailed outline for the desert survival story will be drafted, emphasizing character background, moral dilemmas, and structural flow. - **AI Workflow**: A shared GPT prompt will be developed to standardize language improvements while preserving narrative intent. - **Payment Timeline**: Funds for the first script will be disbursed by month-end, independent of progress on subsequent projects.
Teams Review
## Background The candidate has over eight years of product management experience, beginning with **10Pearls**, a global software consultancy, where they worked with diverse clients across fintech, healthcare, and entertainment. This role provided exposure to early-stage startups and Fortune 500 companies, laying the groundwork for their transition to **Finmark**, a financial planning tool for startups and SMBs. As Finmark’s founding product manager, they built the product from zero to thousands of users, tripled annual recurring revenue (ARR), and managed a 15-person engineering team, culminating in a successful acquisition. Currently at **Bill**, they integrate Finmark’s features into Bill’s platform, lead AI-driven projects (e.g., a financial data chatbot), and handle cross-functional initiatives. ## Work Experience - **Finmark Tenure**: Scaled the product from inception to acquisition by establishing end-to-end processes, including **release management** (increasing monthly releases from 3–4 to 12–14) and **cross-functional collaboration frameworks**. Introduced Jira for backlog prioritization, QA testing, and engineering reviews, ensuring higher output without compromising quality. - **Technical Leadership**: Managed distributed engineering teams across time zones, emphasizing iterative improvements and transparency. Advocated for Kanban workflows to enable continuous delivery. - **Post-Acquisition Integration**: At Bill, streamlined the merger of Finmark’s features into a larger platform and spearheaded an AI chatbot project that reduced support tickets and increased user engagement. - **Adaptability**: Demonstrated agility in shifting from chaotic startup environments to structured corporate settings, balancing execution with strategic alignment. ## Candidate Expectations ### Compensation - Seeks a **competitive salary** within the upper range of the posted $130–155k band, with openness to negotiation based on role alignment and growth potential. ### Career Goals - Prioritizes opportunities for **strategic influence** over time, including roadmap planning and potential leadership roles as the team expands. - Values environments that blend **execution focus** (e.g., technical delivery) with long-term product vision. ### Working Style - Thrives in **ambiguity**, leveraging experience in early-stage startups to build structure from scratch. - Emphasizes **collaborative decision-making**, iterative process changes, and open feedback loops to drive team buy-in. - Comfortable managing **globally distributed teams** (e.g., engineers across 12-hour time zones). ## Questions/Concerns - **Role Scope**: Clarified whether the position allows for strategic roadmap input beyond immediate execution. Expressed interest in evolving into a more strategic role as processes mature. - **Team Dynamics**: Inquired about engineering team locations and time zone coordination, highlighting prior experience with distributed teams. - **Hiring Timeline**: Asked about next steps, including interviews with the product/design manager and CEO, to assess cultural and operational fit. - **Product Challenges**: Sought clarity on the primary hurdles beyond process chaos, such as technical complexity and cross-departmental alignment.
UBM Product Manager - Interview
## Background The candidate has a robust career spanning multiple companies, including 10Pearls, Finmark, and Bill, with a focus on product management and collaborative roles. Their current manager is Greg Lissy, a Product Director at Bill, and they have a history of working closely with leadership, including co-founders at Finmark. References from 10Pearls and previous roles are available to validate their performance, which past managers would likely rate between 7/10 and 10/10, reflecting consistent contributions and a results-driven approach. ## Work Experience - **Recent Role (Bill):** Over the past five years, the candidate has been instrumental in driving growth and operational success, emphasizing teamwork and adaptability. They describe their work as multifaceted, involving strategic execution and hands-on problem-solving in a startup-like environment. - **Finmark Experience:** Collaborated directly with a co-founder, indicating exposure to high-stakes decision-making and scaling challenges. - **10Pearls Tenure:** Built a track record of reliability and delivery, with managers likely highlighting their ability to "wear many hats" and prioritize outcomes. Their approach centers on **pragmatism**—balancing ambitious goals with the realities of complex projects—and a focus on **delivering tangible results**, even when metrics like "3X growth" only partially capture the effort involved. ## Candidate Expectations ### Timing Considerations The candidate will require **three weeks off in late July/early August 2024** for their wedding in Saudi Arabia. While flexible with the interview process, they emphasized the need to account for this absence in onboarding or start-date discussions. ## Questions/Concerns - **Process Timeline:** Expressed interest in understanding the expected duration of the interview process to align with personal commitments. - **Reference Checks:** Clarified readiness to provide references from 10Pearls and Finmark, though noted that contacting their current manager at Bill might require sensitivity. No other concerns were raised, but the candidate highlighted openness to discuss any follow-up details.
UBM Product Manager - Interview
## Cultural Fit The candidate demonstrates strong alignment with organizations valuing **product-led growth (PLG) strategies** and **holistic ownership** of product development. Their experience building foundational systems at Finmark, where they focused on creating self-sustaining growth mechanisms, reflects a strategic mindset suited for scaling startups. While their background leans toward high-level strategy over tactical experimentation, they exhibit adaptability by integrating growth experiments into their skill set. This balance suggests compatibility with teams seeking a **strategic thinker** who can transition between vision and execution. ## Strengths - **Deep PLG Expertise**: Spearheaded end-to-end product strategies at Finmark, designing systems to organically drive user acquisition and retention. For example, they emphasized building product features that inherently encouraged growth, reducing reliance on external marketing. - **Ownership Mentality**: Repeatedly took charge of cross-functional initiatives, managing everything from product roadmaps to growth metrics in resource-constrained environments. - **Articulate Communication**: Demonstrated ability to process complex questions, structure coherent responses, and convey ideas with clarity—critical for collaborating with stakeholders. - **Preparation Rigor**: Invested significant effort in refining interview narratives, ensuring stories followed structured frameworks to highlight measurable impact. ## Weaknesses - **Limited Hands-On Experimentation**: While strategically fluent, direct experience with rapid A/B testing or iterative growth experiments appears less prominent compared to candidates from growth-specific roles (e.g., senior growth PMs at companies like HubSpot). - **Overemphasis on Strategy**: May need to consciously bridge the gap between high-level vision and tactical execution during discussions, as some roles prioritize immediate, data-driven experimentation. - **Assumed Context Awareness**: Occasionally assumed interviewers understood nuances of their past projects (e.g., Finmark’s challenges), risking incomplete storytelling without explicit elaboration on constraints or decision-making processes.
UBM Product Manager - Interview
### Cultural Fit The candidate demonstrated a strong resonance with the ownership-driven culture of the organization, particularly highlighting the company’s history of transitioning from acquisition to independence as a testament to its entrepreneurial spirit. They emphasized valuing autonomy and accountability, aligning with the company’s emphasis on self-driven initiatives. However, their articulation of cultural alignment lacked depth, relying on inferred assumptions about the company’s values rather than citing specific research or examples from interactions with current employees. ### Strengths - **Proven Expertise in Growth and Monetization**: The candidate showcased a track record of driving impactful initiatives, such as developing a **customizable revenue forecasting feature** at Finmark, which increased user acquisition and activation by 15%. This involved cross-functional collaboration with sales, design, and engineering teams to address user needs. - **Problem-Solving in B2B/B2C Contexts**: They highlighted experience in both B2B and B2C environments, including a creative solution to reduce user drop-off by introducing a **spreadsheet template** for expense tracking—a counterintuitive but effective fix that streamlined onboarding. - **Strategic Prioritization**: Demonstrated ability to balance short-term wins (e.g., MVP development) with long-term product vision, such as iterating on features post-launch to expand their impact across the platform. ### Weaknesses - **Communication and Delivery**: Feedback noted a **robotic tone** and lack of vocal modulation during storytelling, which reduced engagement. For example, their explanation of the custom formula feature was described as overly technical and lacking narrative flow. - **Over-Reliance on Assumptions**: When discussing cultural fit, the candidate leaned on speculative insights about the company’s values rather than concrete research, suggesting a need for deeper due diligence. - **Inconsistent Structured Thinking**: In responses to behavioral questions (e.g., describing a failed growth initiative), the candidate occasionally meandered, failing to crisply articulate lessons learned or pivot strategies without prompting.
Teams Review
### Background The candidate has a dynamic career spanning product management roles in fintech and SaaS, with a focus on financial planning tools and platform integrations. Starting as a founding product manager at Finmark, they built the core product from scratch, specializing in financial planning and analysis for startups. Post-acquisition by Bill, they transitioned to managing integrations and scaling product features across a unified platform. Their career shift from Germany to Portugal post-COVID influenced their move toward remote-friendly, mission-driven work, aligning with Kagi’s ethos of decentralised, user-centric innovation. ### Work Experience - **Finmark (Pre- and Post-Acquisition):** As the sole product manager, they developed Finmark’s financial forecasting tool, replacing manual Excel processes with automated, collaborative solutions. Key achievements include: - Designing a **custom variables feature** (an Excel-like interface for financial modeling), which increased customer acquisition by 15% by catering to advanced users. - Leading post-acquisition integration of Finmark’s features into Bill’s unified platform, balancing legacy product maintenance with new development. - Streamlining cross-functional workflows, improving project efficiency by 20% through process optimizations and KPIs. - **Bill Platform Expansion:** Oversaw the development of an AI-powered chatbot (later transitioned to another team due to organizational politics) and managed dependencies across teams, advocating for structured quarterly planning to resolve resource conflicts. - **Leadership Style:** Emphasizes hands-on collaboration with engineering and design teams, user research, and iterative launches. Successfully navigated challenges like delayed deliverables by pairing engineers and escalating resource allocation issues. ### Candidate Expectations **Working Style:** - Prefers environments with minimal bureaucratic layers, where ownership and impact are prioritized over administrative tasks. - Thrives in ambiguity, enjoys end-to-end responsibility, and seeks roles with direct product influence. **Career Goals:** - Aims to work on transformative product challenges, particularly in accessibility and integration (e.g., making niche tools like Kagi Search mainstream). - Values mission-driven companies with engaged communities and ethical data practices. **Avoidances:** - Disinterested in roles where internal politics or territorial ownership disputes dominate (>50% of time). ### Questions/Concerns - **Product Challenges at Kagi:** Raised questions about balancing niche appeal with broader accessibility, particularly overcoming technical barriers for non-technical users (e.g., Safari integration hurdles). - **Community Trust:** Highlighted risks of vocal minority narratives impacting brand reputation and the importance of maintaining transparency. - **Decision-Making:** Inquired about Kagi’s approach to product changes without traditional telemetry, relying on community feedback and intuition. - **Success Metrics:** Explored expectations for the role, emphasizing delivery of impact in ambiguous, resource-constrained environments.
UBM Product Manager - Interview
### Summary
The discussion focused on preparing for two upcoming interviews with hiring managers for roles in growth/monetization and B2B product management. The interviews are scheduled for the following week, with an emphasis on behavioral questions and potential product sense case studies. Key areas of preparation include refining the candidate's introduction, structuring responses using frameworks like STAR (Situation, Task, Action, Result), and practicing product design scenarios.

The mentor provided a detailed framework for tackling product sense questions, breaking down the process into stages:
- **Identifying the problem and setting goals** (user, business, and product objectives).
- **Segmenting users** (e.g., prioritizing 16–18-year-olds for an "Uber for Kids" service due to maturity and lower risk).
- **Proposing solutions** (balancing short-term practicality with visionary ideas).
- **Prioritizing features** using frameworks like MoSCoW (Must-have, Should-have, Could-have, Won't-have).
- **Measuring success** through North Star metrics, supporting metrics, and guardrail metrics.

The candidate's introduction was workshopped to highlight career progression, achievements (e.g., scaling a startup's ARR by 3x), and alignment with the target roles. Emphasis was placed on avoiding negativity and framing motivations around growth, learning, and cultural fit.

### Wins
- A **structured interview preparation plan** was established, including timelines for drafting behavioral stories and practicing delivery.
- Clear **frameworks for product sense questions** were provided, enabling the candidate to approach ambiguous problems systematically.
- The mentor committed to creating a **tailored question bank** based on the job descriptions, ensuring alignment with each role's requirements.
- Progress was made on refining the candidate's **self-introduction**, with actionable feedback to emphasize relevance and positivity.

### Issues
- The candidate lacks prior experience with **product sense case studies**, requiring focused practice to internalize the structured approach.
- The initial self-introduction needed adjustments to avoid vague statements and better align with the target roles' expectations.
- Time constraints were noted, with interviews scheduled within a week, necessitating efficient preparation.

### Commitments
- The candidate will share **job descriptions** for both roles to enable the creation of customized behavioral questions.
- The mentor will provide a **product sense case study document** and a behavioral question bank by the next day.
- Follow-up sessions are planned to **practice delivery** of behavioral answers and product sense scenarios, ensuring clarity and confidence.
- The candidate will draft and refine **behavioral stories** using the STAR method, focusing on past product management experiences.
UBM Product Manager - Interview
### Background
The candidate has built a career spanning product management roles in startups and consulting, with a focus on financial technology and SaaS solutions. Beginning as a founding product manager at Finmark (a YC-backed startup), they played a pivotal role in developing the MVP, achieving product-market fit, and facilitating the company's acquisition within two years. Prior to this, they worked in software consulting as a product delivery manager, managing cross-sector projects in healthcare, fintech, and media. Their transition to Bill as a senior product manager involved integrating Finmark's product, maintaining its application, and leading AI-driven financial tools.

### Work Experience
- **Finmark**: As the first product hire, the candidate spearheaded MVP development for a financial planning tool targeting startups and SMBs. They prioritized rapid value delivery, compressing a projected six-month timeline to meet urgent market needs. Post-acquisition, they ensured seamless integration into Bill's ecosystem.
- **Bill**: Focused on sustaining Finmark's product post-acquisition while innovating features like an AI assistant for financial data analysis. Handled ad-hoc strategic projects, emphasizing scalability and user retention.
- **Consulting**: Earlier roles involved end-to-end product delivery for diverse clients, blending product management with account and delivery leadership. This phase honed skills in requirements gathering, stakeholder alignment, and cross-functional execution.

### Candidate Expectations
#### Career Goals
- Seeks roles offering strategic influence, particularly in AI-driven or global product environments. Expressed interest in transitioning from execution-focused positions to high-impact, mission-aligned opportunities.

#### Cultural & Work Style
- Values customer-centric cultures and entrepreneurial mindsets, as evidenced by their preference for zero-to-one product building and startup environments. Emphasizes alignment with companies that prioritize innovation, ownership, and data-driven decision-making.

#### Professional Development
- Implicitly prioritizes environments fostering continuous learning, as reflected in their engagement with interview coaching focused on leadership principles (e.g., Amazon's tenets like *customer obsession* and *bias for action*).

### Questions/Concerns
- Inquired about the interviewer's experience mentoring growth product managers and the outcomes of recent collaborations.
- Explored logistics for potential coaching packages, including session structure, pricing ($177/week for six sessions), and payment methods (Wise transfers).
- Raised implicit concerns about interview performance, particularly around conciseness, storytelling, and aligning responses with cultural values like *think big* and *dive deep*.
Review Mapping Location Logic
# Property Portfolio Review

---

## **Common Drive Property**

**Leasing Challenges**
The property faces difficulties leasing two vacant 900 sq ft retail spaces due to its tertiary location and small size, which make it less attractive to brokers (the 6% commission structure yields low returns on small deals). A new leasing agent from Dominion Commercial was engaged in October 2023, offering future sale-agency incentives to expedite leasing.

**Financial & Market Outlook**
- Purchased for **$1.7M** with stabilized cash-on-cash returns of **7–8%**, but current vacancies halt distributions.
- Plans to sell after leasing, though stagnant rents and higher interest rates may limit valuation gains compared to 2020/2021 market conditions.

---

## **Fairfield Medical Building**

**Tenant Updates**
- **Retina Center**: Secured a 5-year lease extension through 2029 with rent increases.
- **Hospital Tenant**: Lease expires April 2025 with a 15% CPI-based renewal option. Proposal to negotiate a **10-year extension** with a reduced rent hike (5–8%) to enhance long-term stability.

**Performance**
Consistently delivers **8% returns** with no missed payments. Positioned for a potential sale after lease renegotiations to capitalize on peak valuation.

---

## **Guidepost Montessori Property**

**Tenant Crisis**
- The tenant pays reduced rent ($28k/month vs. $38k originally), which covers the $21k/month mortgage but leaves minimal reserves.
- Financial instability persists: corporate leadership changes, unprofitable locations, and monthly financial disclosures show ongoing struggles.

**Contingency Planning**
- **Cash Reserves**: 2.5 months of mortgage payments saved.
- **Replacement Tenant Options**:
  - *Learning Experience*: Offers $34/sq ft (below the current $38/sq ft) but demands **$750K–$1M** in tenant improvements.
  - Local preschool operator: Expressed interest but no formal LOI.

**Critical Juncture**
Full rent payments resume in **January 2026**, posing a liquidity risk if the tenant defaults. The current strategy focuses on monitoring payments and preparing for potential bankruptcy fallout.

---

*Note: All properties face location-driven risks, underscoring the importance of tenant diversification and lease renegotiation strategies.*
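Purely to illustrate the arithmetic behind the figures above, here is a minimal Python sketch. The inputs are the numbers quoted in the review; the straight-line, pre-tax simplifications (no fees, taxes, or vacancy adjustments) are assumptions for illustration, not anything stated in the meeting.

```python
# Back-of-envelope checks on the figures quoted in the portfolio review.
# Inputs come from the summary; the simplifications are assumptions.

PURCHASE_PRICE = 1_700_000  # Common Drive purchase price ($)

def annual_cash_flow(price: float, coc_return: float) -> float:
    """Annual pre-tax cash flow implied by a cash-on-cash return rate."""
    return price * coc_return

# Stabilized 7-8% cash-on-cash on $1.7M implies roughly:
low = annual_cash_flow(PURCHASE_PRICE, 0.07)
high = annual_cash_flow(PURCHASE_PRICE, 0.08)
print(f"Common Drive stabilized cash flow: ${low:,.0f}-${high:,.0f}/year")

# Guidepost Montessori: reduced rent vs. mortgage leaves a thin margin.
RENT_REDUCED = 28_000   # current reduced rent ($/month)
MORTGAGE = 21_000       # monthly mortgage payment ($)
RESERVE_MONTHS = 2.5    # cash reserves, in months of mortgage cover

margin = RENT_REDUCED - MORTGAGE
reserves = RESERVE_MONTHS * MORTGAGE
print(f"Monthly margin after mortgage: ${margin:,.0f}")
print(f"Cash reserves: ~${reserves:,.0f} "
      f"(~{reserves / MORTGAGE:.1f} months of cover if rent stops)")
```

Run as-is, this reproduces the scale of the figures in the summary: roughly $119K–$136K/year of stabilized cash flow on Common Drive, a $7K monthly cushion on Guidepost, and about $52.5K of reserves, which is why a default before January 2026 is flagged as a liquidity risk.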
Simon Script
# 1-on-1 Meeting Summary

## Summary
The discussion focused on refining a YouTube script about hiker Gerald "Otter" Largay's tragic story. Key feedback included:
- **Adding Narrative Depth**: While factual accuracy is strong, the script needs more vivid storytelling to paint Otter's background (e.g., his childhood in New York's woods) and heighten emotional impact.
- **Tightening the Script**: The current draft is lengthy. The goal is to streamline content for engagement without sacrificing critical details, balancing slow-paced storytelling with concise writing.
- **Structural Clarity**: While the chapter flow works, clearer transitions and mini-hooks at chapter ends could improve viewer retention.
- **Language Adjustments**: Shift from British to American English terms (e.g., "font" → "typeface") and simplify overly formal phrasing for conversational narration.

---

## Wins
- **Strong Research Foundation**: The script's factual backbone was praised for thoroughness and accuracy.
- **Creative Direction**: Visual cues (e.g., fade-to-black transitions, sound design notes) were highlighted as effective storytelling tools.
- **Positive Peer Feedback**: A collaborator acknowledged the script's potential, suggesting minor trimming for pacing.

---

## Issues
1. **Pacing & Engagement**
   - Overly detailed sections risk losing viewer interest.
   - Need to balance Otter's background with a faster entry into the main story.
2. **Terminology & Language**
   - Some terms (e.g., "SPOT device") require brief explanations for general audiences.
   - Formal phrasing disrupts conversational flow.
3. **Structural Gaps**
   - Chapters lack clear hooks to maintain suspense.
   - Missing context for hiking milestones (e.g., the significance of the "Triple Crown").

---

## Commitments
- **Revise Chapter 1**: Add vivid details about Otter's upbringing and contextualize his hiking achievements.
- **Trim Redundant Content**: Tighten journal excerpts and streamline search-and-rescue sequences.
- **Incorporate Hooks**: Add cliffhangers or questions at chapter ends to sustain engagement.
- **Language Polish**: Adjust terminology for clarity and conversational tone.
UBM Product Manager - Interview
# Career Coaching Discussion Summary

## Action Items
The prospect needs to decide whether to invest in the career coaching program for assistance with job search, resume optimization, and interview preparation. The decision is to be made within a week, based on budget considerations and alignment with career goals.

## Prospect
The prospect is a senior product manager with extensive experience who recently took time off after resigning from Bill. Their career began in consulting as a business analyst before shifting to product management. They have significant experience building products from the ground up and scaling teams, having been the first team member to join Finmark (a YC startup) after the co-founders. The prospect is qualified for senior product management roles with salary expectations of $150K+.

## Company
The prospect previously worked at Bill as a senior product manager following Finmark's acquisition in late 2022. They left Bill due to feeling stagnant in their career, a lack of learning opportunities, and excessive bureaucracy in an organization of 3,000–4,000 employees. Prior to Bill, they worked at Temporal as a consultant, managing projects end-to-end in roles spanning product manager, project manager, and account manager.

## Priorities
The prospect is seeking:
- A product management role with compensation of $150K+
- Remote work opportunities (or positions in the DC metro area)
- Meaningful work with social impact, where their contributions directly affect end users
- Landing a position within 2–3 months due to financial considerations
- A way past current job search challenges, having sent 50–100 applications with minimal response
- Professional guidance to improve their approach to the job market after struggling with the application process
UBM Product Manager - Interview
# Job Interview Summary

## Background
The candidate was selected as a top contender from a pool of 10 applicants for a scriptwriting role. His previous work samples and interpersonal interactions demonstrated strong potential, leading to his prioritization for this position.

## Work Experience
- **Freelance Scriptwriting Expertise**: Recent years have focused on scriptwriting, with a proven ability to deliver high-quality work aligned with client needs.
- **Tool Proficiency**: Experienced with Trello, Slack, Discord, and Monday for project management, preferring streamlined workflows that minimize platform noise.
- **Collaborative Approach**: Emphasizes adaptability and iterative improvement, viewing SOPs as "living documents" that evolve with team feedback.

## Candidate Expectations

### Working Style
- Prefers **off-Upwork collaboration** to reduce fees and administrative friction.
- Seeks monthly payments via direct bank transfer for simplicity.
- Open to weekly check-in meetings for alignment and progress tracking.

### Tools & Workflow
- **Communication**: Slack for day-to-day coordination, WhatsApp for urgent matters.
- **Project Management**: Trello for task tracking and Google Drive for document sharing.

### Career Goals
- Prioritizes long-term collaboration with opportunities for skill development (e.g., access to advanced scriptwriting courses and expert feedback).
- Aims to contribute to scalable processes as the team grows.

## Questions/Concerns
- Clarified preferred tools (Trello vs. Discord) and payment workflows.
- Emphasized trust in off-platform collaboration but sought clarity on contract terms and non-compete clauses.
- Expressed interest in understanding team structure and editorial timelines.

---

*Note: The candidate conveyed enthusiasm for the role, aligning with the company's growth-focused vision.*
Faisal <> Sunny
# Interview Summary

## Background
The candidate is an experienced freelance scriptwriter and editor specializing in documentary-style content, video essays, and behind-the-scenes features for the entertainment and outdoor niches. Career highlights include:
- Collaborating with **Designing Hollywood** on projects like *Oppenheimer* to create narrative-driven behind-the-scenes content.
- Writing for YouTube channels in entertainment (e.g., John Campia) and upcoming sports projects.
- Producing independent docu-series and comedy content focused on everyday minutiae and outdoor/nature themes.

## Work Experience
- **Content Focus**: Combines storytelling with suspense to engage viewers, emphasizing YouTube's unique pacing (e.g., balancing slow-burn narratives with quick payoffs).
- **Process**: Works 12–15 hours daily across 2–3 projects, prioritizing writing in the mornings and client calls/editing later.
- **Style**: Avoids AI-generated content, emphasizing originality and human creativity.
- **Recent Work**: Expanding into outdoor survival and disaster stories, aligning with personal interests in hiking, rock climbing, and nature.

## Candidate Expectations

### Compensation
- Open to negotiation but proposed a rate in the **$400 range** per project.

### Working Style
- Prefers **fully remote work** with flexible hours.
- Seeks minimal oversight once established, aiming to "own" the scripting process.
- Values collaboration with experienced scriptwriters for quality control.

### Career Goals
- Long-term commitment to freelance/remote work to avoid traditional office environments.
- Aspires to write "big video essays" and case studies beyond entertainment.

### Specific Needs
- Willing to sign an **NDA and non-compete** (limited to outdoor disaster niches).

## Questions/Concerns
1. **Channel Vision**: Asked about the target tone for YouTube scripts and examples of successful styles in the mystery/outdoor niche.
2. **Success Metrics**: Clarified expectations for ownership, output (4 videos/month), and scalability.
3. **AI Concerns**: Strongly opposes AI use in writing, emphasizing authenticity.
4. **Project Viability**: Expressed interest in the channel's growth potential and long-term creative freedom.

## Next Steps
- The interviewer plans to decide within a week, considering the candidate among the top three.
- Potential follow-up roles include onboarding a second writer for scalability.

The candidate's blend of storytelling expertise, passion for outdoor themes, and anti-corporate mindset aligns with the channel's vision for authentic, suspense-driven content.
Faisal <> Sunny
# Job Interview Summary

## Background
The candidate began their career writing manga scripts before transitioning to YouTube scriptwriting five years ago. They are currently in their senior year of a **Bachelor of Business Administration (BBA)** at the University of North Texas and have developed expertise in diverse content genres, particularly **documentary-style "How It's Made" scripts**. Their work spans over 100 projects, showcasing adaptability across niches.

## Work Experience
- **YouTube Scriptwriting**: 5+ years of experience, specializing in research-driven storytelling.
- **Niche Expertise**: Focused on "How It's Made" content but open to outdoor/adventure themes (e.g., survival stories, natural disasters).
- **Project Volume**: Handles high workloads, having recently completed multiple projects and submitted final deliverables before becoming available for new work.
- **Writing Style**: Prioritizes clear, engaging narratives that are easy to edit for video production. Received feedback on improving hook efficiency in introductions.
- **Collaboration**: Comfortable with iterative feedback and mentorship from senior scriptwriters.

## Candidate Expectations

### Working Style
- Prefers **creative freedom** but values clear guidelines and constructive criticism.
- Open to signing NDAs and adhering to non-compete agreements within the outdoor disaster niche.
- Seeks a **long-term commitment** with opportunities for growth and ownership of projects.

### Career Goals
- Plans to explore the hospitality industry post-graduation, ideally combining it with adventure tourism.
- Views content writing as a potential career path if aligned with personal interests (e.g., outdoor themes).

### Availability
- Flexible schedule: university classes end by 4–5 PM daily, leaving evenings and weekends open for work.
- Can commit to weekly script deadlines after onboarding.

## Questions/Concerns
- No major concerns were raised; the candidate emphasized understanding the need for NDAs and non-compete terms.
- Expressed enthusiasm for blending storytelling with outdoor/adventure topics.
Simon Script
# Interview Summary

## Background
The candidate is pursuing a medical degree while working part-time as a YouTube scriptwriter. With experience dating back to 2021, they have written scripts across niches such as horror, finance, technology, history, and politics. Their focus is on storytelling, balancing scriptwriting with academic commitments.

## Work Experience
- **Experience**: Started YouTube scriptwriting in 2021, working with channels like *Untamed* and *Witnessed*.
- **Niche Diversity**: Avoids sports but covers horror, finance, technology, history, and political storytelling.
- **Content Types**: Specializes in narrative-driven scripts (e.g., animal attacks, real-life events) and listicle-style videos.
- **Workflow**: Prefers detailed research and immersive storytelling over rushed output. Left *Untamed* due to competitive task allocation favoring speed over quality.
- **Current Projects**: Creates scripts for *Witnessed* (real-life moments captured on camera) and political content.

## Candidate Expectations

### Working Style
- Requires adequate time for deep research and script refinement.
- Open to feedback and iterative improvements.
- Comfortable with flexible script lengths (2,000–4,000 words).

### Career Goals
- **Short-Term**: Balance scriptwriting with medical studies.
- **Long-Term**: Launch and manage sustainable YouTube channels, avoiding oversaturated niches.
- **Content Vision**: Focus on evergreen niches with stable audiences rather than trending topics.

### Compensation & Needs
- Seeks stable, long-term opportunities to diversify income streams.
- Prioritizes roles allowing creativity and narrative depth.

## Questions/Concerns
- Clarified the channel's content strategy (real vs. fictional stories).
- Asked about video-length flexibility and upload frequency (goal: 1 video/week).
- Expressed interest in understanding the channel's growth plans and team structure.
- Confirmed that no AI usage is permitted for scriptwriting.