What we see: No alternative named. No mention of GIS consultancies, manual assessments, spreadsheet-based planning, or existing tools that cities use today for climate risk analysis.
Buyer thinking:
"I already work with consultants for this. Why would I add a platform on top of what I have?"
Without a named alternative, the buyer cannot frame BitaGreen as a replacement. The evaluation stays abstract.
What we see: No failure mode described. No mention of why spreadsheets, consultancies, or generic GIS tools fall short when planning nature-based climate adaptation at scale.
Buyer thinking:
"My current approach works well enough. I do not see a reason to change."
Without a failure frame, the buyer defaults to their current approach. Inertia wins.
What we see: The platform page lists "What makes us different?" with four items: High-resolution, Speed, Scale, Minimum data requirements. These are capability claims without a named mechanism explaining why BitaGreen produces better outcomes.
Buyer thinking:
"These are features. Every platform says they are fast and high-resolution. What do you do differently?"
Feature lists without a named mechanism do not survive internal comparison. The buyer cannot explain "why BitaGreen" to a colleague.
What we see: No measurable result on the homepage. No "City X reduced flood risk by Y%" or "Asset managers saved Z in remediation costs." The use cases (Leuven, Malta) are named, but no outcomes are quantified.
Buyer thinking:
"I need a number to take to my director. 'Resilient, greener city' is not a budget justification."
Without a quantified result, the economic buyer has nothing to anchor a budget decision on. Deals stall at internal approval.